
10x more selective

There's this common notion of "10x programmers" who are 10x more productive than the average programmer. We can't quantify productivity, so we don't know whether it's literally true. But enough people definitely appear unusually productive to sustain the "10x programmer" notion.

How do they do it?

People often assume that 10x more productivity results from 10x more aptitude or 10x more knowledge. I don't think so. Now I'm not saying aptitude and knowledge don't help. But what I've noticed over the years is that the number one factor is 10x more selectivity. The trick is to consistently avoid shit work.

And by shit work, I don't necessarily mean "intellectually unrewarding". Rather, the definition of shit work is that its output goes down the toilet.

I've done quite a lot of shit work myself, especially when I was inexperienced and gullible. (One of the big advantages of experience is that one becomes less gullible that way – which more than compensates for much of the school knowledge having faded from memory.)

Let me supply you with a textbook example of hard, stimulating, down-the-toilet-going work: my decade-old adventures with fixed point.

You know what "fixed point arithmetic" is? I'll tell you. It's when you work with integers and pretend they're fractions, by implicitly assuming that your integer x actually represents x/2^N for some value of N.

So to add two numbers, you just do x+y. To multiply, you need to do x*y>>N, because plain x*y would represent x*y/2^2N, right? You also need to be careful so that this shit doesn't overflow, deal with different Ns in the same expression, etc.
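In code, the whole idea fits in a few lines – a toy Q16.16 sketch in C++ (an illustration, nothing to do with the actual code in the story below):

#include <cstdint>
#include <cstdio>

// Q16.16: the int32_t x stands for the fraction x / 2^16.
const int N = 16;

int32_t to_fixed(double v)   { return (int32_t)(v * (1 << N)); }
double  to_double(int32_t x) { return (double)x / (1 << N); }

int32_t fx_add(int32_t a, int32_t b) { return a + b; }  // plain integer addition
int32_t fx_mul(int32_t a, int32_t b) {
    // a*b alone would stand for a*b / 2^(2N); shift right by N to get back to /2^N.
    // The intermediate product needs 64 bits, or it overflows.
    return (int32_t)(((int64_t)a * b) >> N);
}

int main() {
    int32_t x = to_fixed(1.5), y = to_fixed(2.25);
    printf("%g %g\n", to_double(fx_add(x, y)), to_double(fx_mul(x, y)));  // 3.75 3.375
    return 0;
}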

Now in the early noughties, I was porting software to an in-house chip which was under development. It wasn't supposed to have hardware floating point units – "we'll do everything in fixed point".

Here's a selection of things that I did:

  • There was a half-assed C++ template class called InteliFixed<N> (there still is; I kid you not). I put a lot of effort into making it, erm, full-assed (what's the opposite of half-assed?). This included things like making operator+ commutative when it gets two fixed point numbers of different types (what's the type of the result? – see the sketch after this list); making sure the dreadful inline assembly implementing 64-bit intermediate multiplications inlines well; etc. etc.
  • My boss told me to keep two versions of the code – one using floating point, for the noble algorithm developers, and one using fixed point, for us grunt workers fiddling with production code. So I manually kept the two in sync.
  • My boss also told me to think of a way to run some of the code in float, some not, to help find precision bugs. So I wrote a heuristic C++ parser that automatically merged the two versions into one. It took some functions from the "float" version and others from the "fixed" version, based on a header-file-like input telling it what should come from which version.
  • Of course this merged shit would not run or even compile just like that, would it? So I implemented macros where you'd pass to functions, instead of vector<float>&, a REFERENCE(vector<float>), and a horrendous bulk of code making this work at runtime when you actually passed a vector<InteliFixed> (which the code inside the function then tried to treat as a vector<float>).
  • And apart from all that meta-programming, there was the programming itself of course. For example, solving 5×5 equation systems to fit polynomials to noisy data points, in fixed point. I managed to get this to work using hideous normalization tricks and assembly code using something like 96 bits of integer precision. My code even worked better than single-precision floating point without normalization! Yay!
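To give a flavor of the result-type question from the first item, here's a hypothetical Fixed<N> (not the actual InteliFixed), where mixed-precision addition promotes to the finer of the two formats either way around:

#include <cstdint>

// A hypothetical Fixed<N>: an int32_t standing for value / 2^N.
template<int N>
struct Fixed {
    int32_t raw;
    explicit Fixed(int32_t r = 0) : raw(r) {}
};

// What's the type of Fixed<8> + Fixed<12>? One answer: keep the finer precision
// (the larger N) and shift the coarser operand up to match - so that x + y and
// y + x have the same type and the same value.
template<int A, int B>
Fixed<(A > B ? A : B)> operator+(Fixed<A> a, Fixed<B> b) {
    const int R = (A > B ? A : B);
    return Fixed<R>((a.raw << (R - A)) + (b.raw << (R - B)));
}

int main() {
    Fixed<8>  x(3 << 8);   // 3.0 in Q.8
    Fixed<12> y(1 << 11);  // 0.5 in Q.12
    Fixed<12> z = x + y;   // 3.5 in Q.12
    return z.raw == (3 << 12) + (1 << 11) ? 0 : 1;
}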

For months and months, I worked as hard as ever, cranking out as much complicated, working code as ever.

And here's what I should have done:

  • Convince management to put the damned hardware floating point unit into the damned chip. It didn't cost that many square millimeters of silicon – I should have insisted on finding out how many. (FPUs were added in the next chip generation.)
  • Failing that, lay my hands on the chip simulator, measure the cost of floating point emulation, and use it wherever it was affordable. (This is what we ended up doing in many places.)
  • Tell my boss that maintaining two versions in sync like he wanted isn't going to work – they're going to diverge completely, so that no tool in hell will be able to partially merge them and run the result. (Of course this is exactly what happened.)

Why did this end up in many months of shit work instead of doing the right thing? Because I didn't know what's what, because I didn't think I could argue with my management, and because the work was challenging and interesting. It then promptly went down the toilet.

The hardest part of "managing" these 10x folks – people widely known as extremely productive – is actually convincing them to work on something. (The rest of managing them tends to be easy – they know what's what; once they decide to do something, it's done.)

You'd expect the opposite, kind of, right? I mean if you're so productive, why do you care? You work quickly; the worst thing that happens is, nothing comes out of it – then you'll just do the next thing quickly, right? I mean it's the slow, less productive folks that ought to be picky – they're slower and so get fewer shots at new stuff to work on to begin with, right?

But that's the optical illusion at work: the more productive folks aren't that much quicker – not 10x quicker. The reason they appear 10x quicker is that almost nothing they do is thrown away – unlike a whole lot of stuff that other people do.

And you don't count that thrown-away stuff as productivity. You think of a person as "the guy who did X" where X was famously useful – and forget all the Ys which weren't that useful, despite the effort and talent going into those Ys. Even if something else was "at fault", like the manager, or the timing, or whatever.

To pick famous examples, you remember Ken Thompson for C and Unix – but not for Plan 9, not really, and not for Go, not yet – on the contrary, Go gets your attention because it's a language by those Unix guys. You remember Linus Torvalds even though Linux is a Unix clone and git is a BitKeeper clone – in fact because they're clones of successful products which therefore had great chances to succeed due to good timing.

The first thing you care about is not how original something is or how hard it was to write or how good it is along any dimension: you care about its uses.

The 10x programmer will typically fight very hard to not work on something that is likely enough to not get used.

One of these wise guys asked me the other day about checkedthreads which I've just finished, "so is anyone using that?" with that trademark irony. I said I didn't know; there was a comment on HN saying that maybe someone will give it a try.

I mean it's a great thing; it's going to find all of your threading bugs, basically. But it's not a drop-in replacement for pthreads or the like; you need to write the code using its interfaces – nice, simple interfaces, but not the ones you're already using. So there's a good chance few people will bother; whereas Helgrind or the thread sanitizer, which have tons of false negatives and false positives, at least work with the interfaces that people use today.

Why did I bother then? Because the first version took an afternoon to write (that was before I decided I wanted parallel nested loops and stuff), and I figured I had a chance because I'd blog about it (as I do, for example, right now). If I wrote a few posts explaining how you could actually hunt down bugs in old-school shared-memory parallel C code even easier than with Rust/Go/Erlang, maybe people would notice.

But there's already too much chance of a flop here for most of the 10x crowd I personally know to bother trying. Even though we use something like checkedthreads internally and it's a runaway success. In fact the ironic question came from the guy who put a lot of work in that internal version – because internally, it was very likely to be used.

See? Not working on potential flops – that's productivity.

How to pick what to work on? There are a lot of things one can look at:

  • Is there an alternative already available? How bad is it? If it's passable, then don't do it – it's hard to improve on a good thing and even harder to convince that improvements are worth the switch.
  • How "optional" is this thing? Will nothing work without it, or is it a bell/whistle type of thing that can easily go unnoticed?
  • How much work do users need to put in to get benefits? Does it work with their existing code or data? Do they need to learn new tricks or can they keep working as usual?
  • How many people must know about the thing for it to get distributed, let alone used? Will users mostly run the code unknowingly because it gets bundled together with code already distributed to them, or do they need to actively install something? (Getting the feature automatically and then having to learn things in order to use it is often better than having to install something and then working as usual. Think of how many people end up using a new Excel feature vs how many people use software running backups in the background.)
  • How much code to deliver how much value? Optimizing the hell out of a small kernel doing mpeg decompression sounds better than going over a million lines of code to get a 1.2x overall speed-up (even though the latter may be worth it by itself; it just necessarily requires 10x the programmers, not one "10x programmer").
  • Does it have teeth? If users do something wrong (or "wrong"), does it silently become useless to them (like static code analysis when it no longer understands a program), or does it halt their progress until they fix the error (like a bounds-checked array)?

You could easily expand this list; the basic underlying question is, what are the chances of me finishing this thing and then it being actually used? This applies recursively to every feature, sub-feature and line of code: does it contribute to the larger thing being used? And is there something else I could do with the time that would contribute more?

Of course it's more complicated than that; some useful things are held in higher regard than others for various reasons. Which is where Richard Stallman enters and requires us to call Linux "GNU/Linux" because GNU provided much of the original userspace stuff. And while I'm not going to call it "Gah-noo Lee-nux", there's sadly some merit to the argument, in the sense that yeah, unfortunately some hard, important work is less noticed than other hard, important work.

But how fair things are is beside the point. After all, it's not like 10x the perceived productivity is very likely to give you 10x the compensation. So there's not a whole lot of reasons to "cheat" and appear more productive than you are. The main reason to be productive is because there's fire raging up one's arse, more than any tangible benefit.

The point I do want to make is, to get more done, you don't need to succeed more quickly (although that helps) as much as you need to fail less often. And not all failures are due to lack of knowledge or skill; most of them are due to quitting before something is actually usable – or due to there being few chances for it to be used in the first place.

So I believe, having authored a lot of code that went down the toilet, that you don't get productive by working as much as by not working – not on stuff that is likely to get thrown away.

Efficiency is fundamentally at odds with elegance

Q: In retrospect, wasn't the decision to trade off programmer efficiency, security, and software reliability in exchange for runtime performance a fundamental mistake?

A: Well, I don’t think I made such a tradeoff. I want elegant and efficient code. Sometimes I get it. The efficiency vs. correctness, efficiency vs. programmer time, efficiency vs. high level, etc. dichotomies are largely bogus.

An interview with Bjarne Stroustrup

Unlimited-precision symbolic computation is more elegant than floating point numbers. You simply never have any numerical stability problems. Anything algebraically correct – a valid way to solve for x given the equations involving x – is also computationally correct. You don't need to know all the quirks of floating point, and you won't need "numerical recipes" which are basically ways to deal with these quirks.

Symbolic computation is rather widely available – say, in Mathematica, Matlab/Maple, etc. – but it's not used nearly as much as floating point. That's because floating point is much more efficient, and a whole lot of things cannot be done in a reasonable amount of time and space with symbolic computation.

It is undeniable that this efficiency comes at a cost of correctness (in terms of increased likelihood of bugs), programmer time, and elegance. There are plenty of algebraically elegant solutions which just don't work in floating point. If you don't notice, you have a bug; if you do notice, you spend your (programmer) time looking for an alternative, and said alternative may be less elegant.
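The standard textbook illustration (nothing to do with the interview – just the simplest example around): the quadratic formula is algebraically correct, and it still destroys most of your significant digits when b^2 dwarfs 4ac, unless you rearrange it into a less obvious form:

#include <cstdio>
#include <cmath>

int main() {
    // Solve x^2 + b*x + c = 0 with b huge relative to c.
    double b = 1e8, c = 1.0;
    double d = std::sqrt(b*b - 4*c);

    // Algebraically correct, numerically poor: b and d are nearly equal,
    // so -b + d cancels almost all significant digits.
    double x1_naive = (-b + d) / 2;

    // Rearranged: compute the "safe" root first, then use x1 * x2 == c.
    double x2 = (-b - d) / 2;
    double x1_stable = c / x2;

    printf("naive:  %.17g\n", x1_naive);   // roughly -7.45e-9 - off by about 25%
    printf("stable: %.17g\n", x1_stable);  // about -1e-8, the correct root
    return 0;
}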

Floating point is not the lowest level we can sink to. In many cases the quest for still more efficiency brings us to the dark quagmires of fixed point arithmetic. That's when you have an integer and you implicitly assume it's divided by 2^N – the exponent is statically known so the point isn't floating at run time, so to say. So to implement a+b you just use integer addition, and a*b is an integer multiplication followed by a right shift by N. Or there are midway scenarios where you have many integers with a common, dynamically computed exponent because they all have roughly the same range (FFT is one case where this is often done).

Fixed point is so ugly that there's not even a recipes book that I'm aware of; it just doesn't come out tasty in the slightest. The biggest trouble isn't even the very likely overflow but the loss of precision: floating point guarantees a certain number of significant bits in the mantissa, while fixed point doesn't – unless you explicitly normalize the number at some point using CLZ or similar (making it more of a floating point emulation than "true" fixed point where exponents aren't represented at runtime). You think you have a 32-bit mantissa and an implicit exponent so the number is rather precise – but those 32 bits can have most of the high bits set to zero and then it's not precise at all.
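Concretely (a made-up snippet; __builtin_clz is the GCC/Clang intrinsic, so adjust for your compiler): a small Q16.16 value has almost no significant bits left, squaring it gives you exactly zero, and the "fix" – normalizing with CLZ and keeping the shift around – is floating point emulation in all but name:

#include <cstdint>
#include <cstdio>

const int N = 16;  // Q16.16: int32_t x stands for x / 2^16

int main() {
    // 0.001 in Q16.16 is the integer 65 - about 7 significant bits, versus the
    // 24 bits a single-precision float keeps for the same value.
    int32_t tiny = (int32_t)(0.001 * (1 << N));
    printf("raw = %d\n", tiny);

    // Squaring it: 65*65 >> 16 == 0. All the precision is gone.
    printf("tiny^2 in Q16.16 = %d\n", (int32_t)(((int64_t)tiny * tiny) >> N));

    // The way out is to normalize: shift left until the top bits are used and
    // remember the shift - that is, keep an explicit exponent.
    int shift = __builtin_clz((uint32_t)tiny) - 1;  // GCC/Clang builtin
    int32_t mantissa = tiny << shift;
    printf("normalized: mantissa = %d, extra exponent = -%d\n", mantissa, shift);
    return 0;
}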

However, a mixture of 8-bit, 16-bit and 32-bit fixed point operations often beats the performance of floating point by a large margin; especially if you have SIMD instructions, because you can always fit 4x more 8-bit numbers into a register than 32-bit numbers, and a single-precision floating point multiplier is ~10x more costly in hardware than an 8-bit integer multiplier, so you have fewer of them.

Again, this efficiency comes at a cost of correctness (in terms of increased likelihood of bugs), programmer time, and elegance.

In computer vision, you're often looking for objects of a certain class, and you have a classifier taking a rectangular image region and telling whether this region contains an object of that class. A simple and elegant object detection algorithm is to apply this classifier to every possible rectangle in the image, and then remove rectangles which mostly overlap (as in, if there are 15 similar rectangles saying there's a face in roughly the same place, make one rectangle out of them all).

This elegant algorithm is never used, because there are too many possible rectangles (every coordinate times every size). A common optimization is to use a cascade of classifiers. That is, apply a very cheap classifier with a lot of false positives but hopefully almost no false negatives to every region. The purpose is to throw away most of the rectangles so that the remaining smaller set still contains all the true positives – and a lot of false positives, of course, but far fewer.

This is repeated with many (possibly increasingly expensive) classifiers processing increasingly smaller sets of rectangles. The most widely deployed classifier cascade is probably the Viola-Jones face detector, currently available in most digital cameras displaying little squares around faces. As you could have noticed, it often misses a face, which is to be expected with all the little classifiers hurrying to throw rectangles away. And which is OK for a consumer application where a success rate of 90-95% is perfectly fine and an extra 1% of detection rate is not worth a $0.01 increase in price. The point is that the error rate is undeniably increased by stricter efficiency requirements.
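Schematically, a cascade is just this (hypothetical classifier functions, nothing Viola-Jones-specific):

#include <cstdio>
#include <vector>
#include <functional>

struct Rect  { int x, y, w, h; };
struct Image { /* pixel data omitted in this sketch */ };

using Stage = std::function<bool(const Image&, const Rect&)>;

// Cheap classifiers first, expensive ones later; each stage only sees the
// rectangles that survived all the previous stages.
std::vector<Rect> detect(const Image& img, std::vector<Rect> candidates,
                         const std::vector<Stage>& stages)
{
    for (const Stage& stage : stages) {
        std::vector<Rect> survivors;
        for (const Rect& r : candidates)
            if (stage(img, r))           // "might be a face" - keep it for the next stage
                survivors.push_back(r);
        candidates.swap(survivors);      // most rectangles die early, on the cheap stages
    }
    return candidates;                   // whatever survived every stage
}

int main() {
    Image img;
    std::vector<Rect> all = { {0,0,24,24}, {10,10,24,24}, {50,50,48,48} };
    std::vector<Stage> stages = {
        [](const Image&, const Rect& r) { return r.w >= 24; },  // dummy cheap stage
        [](const Image&, const Rect& r) { return r.x < 40;  },  // dummy pricier stage
    };
    printf("%d candidate(s) survived\n", (int)detect(img, all, stages).size());
    return 0;
}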

The upshot is that object detection provides a broad family of examples where, again, runtime efficiency comes at a cost of correctness (in terms of increased likelihood of bugs – there's much more code to write – as well as the ultimate detection rate), programmer time, and elegance.

(Even the smallish sub-problem of merging overlapping rectangles provides an example where efficiency has to be bought with all those other things including elegance. A short, readable, elegant solution could use an O(N^2) nested loop where each rectangle is intersected with every other rectangle. One optimization is some sort of spatial data structure where you don't look at rectangles if they don't fall into the same bin of the spatial subdivision because then they can't intersect. That's faster, more buggy and less readable.)
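The elegant version of that merging step looks about like this (a sketch – real detectors use fancier overlap criteria than "any intersection at all"):

#include <vector>
#include <algorithm>

struct Rect { int x, y, w, h; };

bool overlap(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

// The O(N^2) nested loop: intersect every rectangle with every other one,
// replacing overlapping pairs with their bounding box. Short and readable.
std::vector<Rect> merge(std::vector<Rect> rects) {
    std::vector<Rect> out;
    std::vector<bool> used(rects.size(), false);
    for (size_t i = 0; i < rects.size(); ++i) {
        if (used[i]) continue;
        Rect m = rects[i];
        for (size_t j = i + 1; j < rects.size(); ++j) {
            if (!used[j] && overlap(m, rects[j])) {
                int x2 = std::max(m.x + m.w, rects[j].x + rects[j].w);
                int y2 = std::max(m.y + m.h, rects[j].y + rects[j].h);
                m.x = std::min(m.x, rects[j].x);
                m.y = std::min(m.y, rects[j].y);
                m.w = x2 - m.x;
                m.h = y2 - m.y;
                used[j] = true;
            }
        }
        out.push_back(m);
    }
    return out;
}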

Does this have anything to do with the quote by Stroustrup though? His implied point was how the std::sort template is more elegant as well as more efficient than C's qsort function fiddling with void pointers, right? Or ostream vs printf? Whereas these are all examples of "algorithmic efficiency" – is that even related to language design?

Well, the thing is, "algorithms" and "languages/code" is a continuum:

Think of all the psychic energy expended in seeking a fundamental distinction between "algorithm" and "program". — Alan Perlis

Given that it's a continuum, it is doubtful that a statement which is profoundly wrong in an "algorithmic" context could be true in a "programming" context. If the tradeoff between runtime efficiency and programmer efficiency/"elegance" is fundamental from an algorithmic point of view, then it's likely fundamental in computing in general.

For a concrete example of how blurry the line between "algorithmic efficiency" and "code efficiency" is, let's discuss corner detection. The FAST corner detector is a decision tree looking at pixels surrounding the central pixel and comparing the image intensity of the center to its surroundings. Similarly to other classifier cascades, "not a corner" is a quick decision, while "yes, a corner" is decided after all the checks are done.

The decision tree is implemented in several thousand lines of auto-generated C code full of gotos. (That's one addition to the recent discussion about the utility of gotos in systems programming; add computer vision to the list of goto applications, I guess.)

Is it possible to implement the decision tree in a more elegant and readable way? Of course – but at the cost of efficiency; not asymptotic efficiency since it'd be the same decision tree, but efficiency nonetheless.
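For flavor, here's a toy fragment in both styles – a made-up three-pixel test, nothing like the real FAST tree, which checks 16 pixels on a circle and is generated from training data:

// Generated style: a hard-coded decision tree, flattened into labels and gotos.
// (A toy; the real thing is thousands of lines of this.)
bool is_corner_generated(const unsigned char* p, const int off[3], int center, int t) {
    if (p[off[0]] > center + t) goto brighter1;
    if (p[off[0]] < center - t) goto darker1;
    return false;
brighter1:
    if (p[off[1]] > center + t) goto brighter2; else return false;
darker1:
    if (p[off[1]] < center - t) goto darker2; else return false;
brighter2:
    return p[off[2]] > center + t;
darker2:
    return p[off[2]] < center - t;
}

// Readable style: the same test as a loop over the offsets. Same decision,
// same asymptotic cost - but it always does all 6 comparisons, where the
// tree above typically gets away with 2 to 4.
bool is_corner_readable(const unsigned char* p, const int off[3], int center, int t) {
    int brighter = 0, darker = 0;
    for (int i = 0; i < 3; ++i) {
        if (p[off[i]] > center + t) ++brighter;
        if (p[off[i]] < center - t) ++darker;
    }
    return brighter == 3 || darker == 3;
}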

Is this goto business an "algorithmic" optimization or a "program" optimization? Consider that FAST's entire raison d'etre is being faster than, say, the Harris corner detector. Constants matter for high-resolution images processed in real time.

Consider furthermore that both FAST and Harris are O(#pixels) since they look at a finite, small number of pixels around each coordinate and execute a finite, small number of operations. Consider that which is more efficient depends on the platform – SIMD helps speed up Harris but not FAST, and different SIMD instruction sets speed it up by very different factors. (This is also true for linear classifiers vs Viola-Jones and for other cases.) And consider the fact that algorithmically, they're wildly different – Harris looks at eigenvectors whereas FAST is an intensity-based decision tree, they have tunable decision thresholds with different meanings, and different sets of false positives and false negatives.

So is FAST a work in the area of "algorithms" or "programming", and is the auto-generated mountain of code essential to make it efficient an "algorithm" or a "program"? My answer is that it's both, in the sense that you can't really draw the line.

But what about std::sort and C++'s combination of efficiency and elegance? Well, C++ rather obviously does pay with programmer efficiency for runtime efficiency, without an option to opt out of the deal. Every allocation can leak, every reference can be dangling, every buffer can overflow, etc. etc. etc.
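In case that sounds abstract – the default failure modes, each in a couple of lines (deliberately broken code, obviously):

#include <cstring>

int& dangling() {
    int local = 42;
    return local;            // a reference to a stack variable that's about to die
}

int* leak() {
    return new int[1000];    // freed by nobody, unless every caller remembers to
}

void overflow(const char* user_input) {
    char buf[8];
    std::strcpy(buf, user_input);  // nothing checks that user_input fits in 8 bytes
}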

This blindingly obvious fact doesn't surprise those who realize the fundamental tradeoff between efficiency and a whole lot of other things, some of which can be collectively called "elegance". Whereas those refusing to believe in such a tradeoff manage to not even notice the consequences. For example:

The relatively small size of the C++ standard library – primarily reflecting the lack of resources in the ISO C++ standards committee – compared with the huge corporate libraries can be a real disadvantage for C++ developers compared to users of proprietary languages.

…So why do languages without corporate backing which are 2 to 3 times younger than C++, such as Perl, Python, and Ruby, have so many more libraries, both standard and non-standard but widely used?

The best uses of C++ involve deliberate design. You design classes to represent the notions of your application, you organize those classes into hierarchies, you express your algorithms precisely and abstractly (no, that “and” is not a mistake), you use libraries, you build libraries, you devise error handling and resource management strategies and express them in code. The result can be beautiful, efficient, maintainable, etc. However, it’s not just sitting down and writing a series of statements directly manipulating characters, integers, and floating point numbers.

The thing is that actually doing something useful involves a whole lot of "direct manipulations" of characters, integers and floating point numbers – and strings, arrays, hash tables, files, sockets, windows, matrices, etc. Languages which let you "just sit down and write the series of statements" give programmers the extra productivity which results in all those extra libraries getting written.

However, equally undeniably it does cost you runtime efficiency, because you pay an overhead for built-in resource management strategies such as garbage collection, built-in error detection strategies such as bounds checking, and a whole lot of other things.

It's not surprising that Stroustrup sees the problem in the fact that corporations "with the resources" invest them in what he thinks is the wrong thing, presumably because of their self-interested profit motives. Alex Stepanov, who designed the STL, made similar statements, and so did Alan Kay and every other perfectionist technologist. If you seek perfection to the point of denying the existence of the most obvious tradeoffs – and tradeoffs are a pesky thing for a perfectionist because they imply that perfection is unattainable – then you're also likely to somewhat resent corporations, markets, etc. For a discussion of that, see my take on Worse Is Better vs The Right Thing.

(Of course there are plenty of perfectionists who, instead of rationalizing C++'s productivity problems, spend their time denying that Python is slow, or keep waiting for Python to become fast. It will not become fast. Also, all its combinations with C/C++ designed to remedy this inefficiency will forever be ugly. We had psyco, PyPy, pyrex, Cython, Unladen Swallow, CPython extension modules, Boost.Python, and who knows what else. Python is not designed to be efficient; it's designed for productivity and for extensibility through a necessarily ugly C FFI. The tradeoff is fundamental. Python is slow forever. Python bindings are ugly forever.)

So if the tradeoff is fundamental, should we give up on efficient resource utilization? No – if the elegant thing is to load the database table into RAM, it doesn't mean that we have enough RAM. Should we give up on programmer productivity? No – inline assembly or lock-free code which isn't obviously bug-free doesn't belong in our cold paths.

We should, however, give up on perfection. Some code will be slower than we want because we don't have time to optimize it, and some code will be uglier than we want because we have no choice but to optimize it.

A hope to defeat a fundamental tradeoff is nothing but a source of frustration, and it's a bliss to have lost such a hope.

"Value", the irksome euphemism

An economic value is the worth of a good or service as determined by the market.

Wikipedia

You keep using that word. I do not think it means what you think it means.

Inigo Montoya

If people pay you $1, then the economic value of your good or service has been determined by the market to be $1.

"Creating value" is thus a euphemism for "getting people to pay you money" – which has nothing to do with the usual meaning of "value".

Why is "value" an irksome euphemism? Because heroin dealers "create value", as determined by the market.

In the context of my own profession, all of the following are examples of value creation:

  • An office suite using undocumented and constantly changing formats. Value to users having no other way to access their own and others' documents: $100-$500, depending.
  • A distribution channel allowing developers to deploy software on popular devices, for which no legal alternative exists. Value to developers: 30% of revenue.
  • A social network with a billion private and corporate users who signed up for free, with a new charge to reach a given share of one's audience. Value created: $200 per post.

The more money I've extracted from you, the more value I've created, haven't I?

I'm not picking on Microsoft, Apple or Facebook. I can imagine working for any of them. My conscience is as flexible as the next guy's.

(A particularly inflexible conscience is a horrible condition. Feet to which no mass-produced shoes fit are merely inconvenient. A conscience incompatible with mass-produced social arrangements is a huge burden – not just on its owner, but on his friends and family.)

All I'm saying is that goods and services are distinct from bads and disservices, though both "create value".

Moreover, some sort of disservice tends to be essential to "value creation", a.k.a. the extraction of money. People are attached to their money, and will only part with it when given little choice. Microsoft, Apple and Facebook constantly hone their methods of limiting users' choice. Who doesn't?

Business is what it is. It's not that consumers (us) are any better than producers (us). Nor is it impossible for something "free" – as in speech, beer, rider, whatever – to be a disservice to its users.

I just don't think "value" is the right word.

Do you really want to be making this much money when you're 50?

"Do You Really Want to be Doing This When You're 50?"

Well, I didn't really want to be doing this when I was 20. I'm in it for the money. As long as there's money in programming, I'll stay for the money, in all likelihood.

What else do you want to be doing when you're 50? Give me a profession remotely close to programming in the following ways:

  • Little or no required education
  • Good compensation, even for mediocre performers
  • Millions of jobs
  • No physical effort
  • No health or legal risks

Programming is money for nothing. Programming is very easy to enter and extremely hard to quit. What would you do instead?

I work with three lawyers by training – two became programmers and one became a PM. I haven't met programmers who became lawyers. I do know an engineer – not a programmer – who became a patent attorney (reported reason: "at some point, you resent your manager being the age of your kids"). Would you like to become a patent attorney when you're 50?

I had a manager who decided he'd rather be a school teacher, thinking that this line of work is more beneficial to society. He quit after 8 months, saying in his parting interview to a mainstream newspaper: "Sometimes I just want to enter the classroom with a machine gun and open fire". He's with Samsung now; he feels that his contribution to smartphone imagers benefits society substantially enough.

One of my roommates at work has been studying a bunch of things for a while now. He's got a degree in psychology and in something called Visual Theater. He's been programming part-time all the while, which is how he financed his studies. He's programming as a part of his visual performances (there's computer music involved). He'll likely be programming to finance his art work. I'm not sure he plans to quit programming at any defined point.

I've seen a lot of people "quitting" to study anything from physics to philosophy, and then going back to programming. The money is addictive. There are many other sources of satisfaction, of course – which is why I run this blog for free – but much of this satisfaction has to do with demand, directly or indirectly, and is thus very much related to money. "Building something useful" and "making money" are close relatives.

You could, of course, become independently wealthy. But you probably won't, and then programming is your plan B. There's also a thing about material wealth – it's easily taken away. I'm from Soviet Russia, so I tend to exaggerate the likelihood of that – but really, property is easily confiscated, and paper money can become paper overnight. It's not just a USSR thing; the US confiscated gold from its citizens at about the same time as the USSR. Professional ability, however, can't be confiscated. The prudent (paranoid?) independently wealthy programmer will thus make some effort to stay in good shape.

There's the argument that professional programming is stressful. Again – compared to what? A doctor's work? A lawyer's work? Answering calls by irate customers while your responses are recorded for later inspection?

What stress? Programmers who can program at all – as in, print out a binary tree correctly – are very scarce. This scarcity makes it rather hard to push programmers around. You can try to bully them into doing unpaid overtime, but they quickly learn that it's a seller's market, and that you're basically bluffing. You have nobody to replace them with.

With demand outstripping supply, there's enough space in programming for everyone. This makes for a not-so-competitive environment, compared to, say, finance/investment banking type of jobs. Programmers are also typically shielded from customers and senior management – the kind of people who're always right, a trait making communication somewhat tiresome.

Deadlines? Sure, we have them, just like everybody else. Let's admit it though – we tend to miss them, and it's not very stressful to us unless we want it to be. If you're given an impossible schedule, and you do your best, and you miss the deadline, you can suffer deeply or you can maintain mental peace. The fact is that your material well-being is rarely in jeopardy because of a missed deadline, so your reaction is fully up to you.

There's the argument that programmers can't fully understand what's going on, what with all the APIs and layers and stuff. And if you don't understand your own environment, that's stressful and that's not fun. Fair enough; but again – who does understand his environment more than a programmer? A doctor digging into a patient's guts? A lawyer sifting through legal documents? An investor trading financial derivatives? A manager overseeing 10 or 20 programmers? With all the self-inflicted complexity, we're still in a better shape than most.

The fact is that there are relatively few programmers in their fifties around. Does it mean people don't survive in programming though? More likely, it is simply a result of growth. There were few 20-year-old programmers 30 years ago – compared to 10 years ago. Therefore, there are fewer 50-year-old programmers today than 30-year-old programmers. To the extent that the growth in programming slows down, things will be different 20 years down the road.

So I'm not planning to quit programming, not because it's such a great source of joy by itself, but because it looks so good compared to just about anything else. Maybe not the most "passionate" statement – but passion burns out, whereas greed is sustainable. And if you plan to quit programming, I wonder what your alternative is, and I won't be surprised if you come back to programming in a few years.

What "Worse is Better vs The Right Thing" is really about

I thought about this one for a couple of years, then wrote it up, and left it untouched for another couple of years.

What prompted me to publish it now – at least the first, relatively finished part – is Steve Yegge's post, an analogy between the "liberals vs conservatives" debate in politics and some dichotomies in the professional worldviews of software developers. The core of his analogy is risk aversion: conservatives are more risk averse than liberals, both in politics and in software.

I want to draw a similar type of analogy, but from a somewhat different angle. My angle is, in politics, one thing that people view rather differently is the role of markets and competition. Some view them as mostly good and others as mostly evil. This is loosely aligned with the "right" and the "left" (with the caveat that the political right and left are very overloaded terms).

So what does this have to do with software? I will try to show that the disagreement about markets is at the core of the conflict presented in the classic essay, The Rise of Worse is Better. The essay presents two opposing design styles: Worse Is Better and The Right Thing.

I'll claim that the view of economic evolution is what underlies the Worse Is Better vs The Right Thing opposition – and not the trade-off between design simplicity and other considerations as the essay states.

So the essay says one thing, and I'll show you it really says something else. Seriously, I will.

And then I'll tell you why it's important to me, and why – in Yegge's words – "this conceptual framework became one of the most important tools in my toolkit" (though of course each of us is talking about his own analogy).

Specifically, I came to think that you can be for evolution or against it, and I'm naturally inclined to be against it, and once I got that, I've been trying hard to not overdo it.

***

Much of the work on technology is done in a market context. I mean "market" in a relatively broad sense – not just proprietary for-profit developments, but situations of competition. Programs compete for users, specs compete for implementers, etc.

Markets and competition have a way to evoke strong and polar opinions in people. The technology market and technical people are no exception, including the most famous and highly regarded people. Here's what Linus Torvalds has to say about competition:

Don't underestimate the power of survival of the fittest. And don't ever make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That's giving your intelligence much too much credit.

And here's what Alan Kay has to say:

…if there’s a big idea and you have deadlines and you have expedience and you have competitors, very likely what you’ll do is take a low-pass filter on that idea and implement one part of it and miss what has to be done next. This happens over and over again.

Linus Torvalds thus views competition as a source of progress more important than anyone's ability to come up with bright ideas. Alan Kay, on the contrary, perceives market constraints as a stumbling block insurmountable for the brightest idea.

(The fact that Linux is vastly more successful than Smalltalk in "the market", whatever market one considers, is thus fully aligned with the creators' values.)

Incidentally, Linux was derived from Unix, and Smalltalk was greatly influenced by Lisp. At one point, Lisp and Unix – the cultures and the actual software – clashed in a battle for survival. The battle apparently followed a somewhat one-sided, Bambi meets Godzilla scenario: cheap Unix boxes quickly replaced sophisticated Lisp-based workstations, which became collectible items.

The aftermath is bitterly documented in The UNIX-HATERS Handbook, groundbreaking in its invention of satirical technical writing as a genre. The book's take on the role of evolution under market constraints is similar to Alan Kay's and the opposite of Linus Torvalds':

Literature avers that Unix succeeded because of its technical superiority. This is not true. Unix was evolutionarily superior to its competitors, but not technically superior. Unix became a commercial success because it was a virus. Its sole evolutionary advantage was its small size, simple design, and resulting portability.

The "Unix Haters" see evolutionary superiority as very different from technical superiority – and unlikely to coincide with it. The authors' disdain for the products of evolution isn't limited to development driven by economic factors, but extends to natural selection:

Once the human genome is fully mapped, we may discover that only a few percent of it actually describes functioning humans; the rest describes orangutans, new mutants, televangelists, and used computer sellers.

Contrast that to Linus' admiration of the human genome:

we humans have never been able to replicate something more complicated than what we ourselves are, yet natural selection did it without even thinking.

The UNIX-HATERS Handbook presents in an appendix Richard P. Gabriel's famous essay, The Rise of Worse Is Better. The essay presents what it calls two opposing software philosophies. It gives them names – The Right Thing for the philosophy underlying Lisp, and Worse Is Better for the one behind Unix – names I believe to be perfectly fitting.

The essay also attempts to capture the key characteristics of these philosophies – but in my opinion, it focuses on non-inherent embodiments of these philosophies rather than their core. The essay claims it's about the degree of importance that different designers assign to simplicity. I claim that it's ultimately not about simplicity at all.

I thus claim that the essay discusses real things and gives them the right names, but the wrong definitions – a claim somewhat hard to defend. Here's my attempt to defend it.

Worse is Better – because it's simpler?

Richard Gabriel defines "Worse Is Better" as a design style focused on simplicity, at the expense of completeness, consistency and even correctness. "The Right Thing" is outlined as the exact opposite: completeness, consistency and correctness all trump simplicity.

First, "is it real"? Does a conflict between two philosophies really exist – and not just a conflict between Lisp and Unix? I think it does exist – that's why the essay strikes a chord with people who don't care much about Lisp or Unix. For example, Jeff Atwood

…was blown away by The Rise of "Worse is Better", because it touches on a theme I've noticed emerging in my blog entries: rejection of complexity, even when complexity is the more theoretically correct approach.

This comment acknowledges the conflict is real outside the original context. It also defines it as a conflict between simplicity and complexity, similarly to the essay's definition – and contrary to my claim that "it's not about simplicity".

But then examples are given, examples of "winners" at the Worse Is Better side – and suddenly x86 shows up:

The x86 architecture that you're probably reading this webpage on is widely regarded as total piece of crap. And it is. But it's a piece of crap honed to an incredibly sharp edge.

x86 implementations starting with the out-of-order implementations from the 90s are indeed "honed to an incredibly sharp edge". But x86 is never criticized because of its simplicity – quite the contrary, it's criticized precisely because an efficient implementation can not be simple. This is why the multi-billion-dollar "honing" is necessary in the first place.

Is x86 an example of simplicity? No.

Is it a winner at the Worse is Better side? A winner – definitely. At the "Worse is Better" side – yes, I think I can show that.

But not if Worse Is Better is understood as "simplicity trumps everything", as the original essay frames it.

Worse is Better – because it's more compatible?

Unlike Unix and C, the original examples of "Worse Is Better", x86 is not easy to implement efficiently – it is its competitors, RISC and VLIW, that are easy to implement efficiently.

But despite that, we feel that x86 is "just like Unix". Not because it's simple, but because it's the winner despite being the worse competitor. Because the cleaner RISC and VLIW ought to be The Right Thing in this one.

And because x86 is winning by betting on evolutionary pressures.

Bob Colwell, Pentium's chief architect, was a design engineer at Multiflow – an early VLIW company which was failing, prompting him to join Intel to create their out-of-order x86 implementation, P6. In The Pentium Chronicles, he gives simplicity two thumbs up, acknowledges complexity as a disadvantage of x86 – and then explains why he bet on it anyway:

Throughout the 1980s, the RISC/CISC debate was boiling. RISC's general premise was that computer instruction sets … had become increasingly complicated and counterproductively large and arcane. In engineering, all other things being equal, simpler is always better, and sometimes much better.

…Some of my engineering friends thought I was either masochistic or irrational. Having just swum ashore from the sinking Multiflow ship, I immediately signed on to a "doomed" x86 design project. In their eyes, no matter how clever my design team was, we were inevitably going to be swept aside by superior technology. But … we could, in fact, import nearly all of RISC's technical advantages to a CISC design. The rest we could overcome with extra engineering, a somewhat larger die size, and the sheer economics of large product shipment volume. Although larger die sizes … imply higher production cost and higher power dissipation, in the early 1990s … easy cooling solutions were adequate. And although production costs were a factor of die size, they were much, much more dependent on volume being shipped, and in that arena, CISCs had an enormous advantage over their RISC challengers.

…because of having more users ready to buy them to run their existing software faster.

x86 is worse – as is quite clear now that, in cell phones and tablets, easy cooling solutions are not adequate and the RISC processor ARM wins big. But in the 1990s, because of compatibility, x86 was better.

Worse is Better, even if it isn't simpler – when The Right Thing is right technically, but not economically.

Worse is Better – because it's quicker?

Interestingly, Jamie Zawinski, who first spread the Worse is Better essay, followed a path somewhat similar to Colwell's. He "swum ashore" from Richard Gabriel's Lucid Inc., where he worked on what would become XEmacs, to join Netscape (named Mosaic at the time) and develop their very successful web browser. Here's what he said about the situation at Mosaic:

We were so focused on deadline it was like religion. We were shipping a finished product in six months or we were going to die trying. …we looked around the rest of the world and decided, if we're not done in six months, someone's going to beat us to it so we're going to be done in six months.

They didn't have to bootstrap the program on a small machine as in the Unix case. They didn't have to be compatible with an all-too-complicated previous version as in the x86 case. But they had to do it fast.

Yet another kind of economic constraint meaning that something else has to give. "We stripped features, definitely". And the resulting code was, according to jwz – not simple, but, plainly, not very good:

It's not so much that I was proud of the code; just that it was done. In a lot of ways the code wasn't very good because it was done very fast. But it got the job done. We shipped – that was the bottom line.

Worse code is Better than not shipping on time – Worse is Better in its plainest form. And nothing about simplicity.

Here's what jwz says about the Worse is Better essay – and, like Jeff Atwood, he gives a summary that doesn't summarize the actual text – but summarizes "what he feels it should have been":

…you should read it. It explains why mediocrity has better survival characteristics than perfection…

The essay doesn't explain that – the essay's text explains why simple-but-wrong has better survival characteristics than right-but-complex.

But as evidenced by jwz's and Atwood's comments, people want it to explain something else – something about perfection (The Right Thing) versus less than perfection (Worse is Better).

Worse is Better – evolutionarily

And it seems that invariably, what forces you to settle for less than perfection, what elects worse-than-perfect solutions, what "thinks" they're better, is economic, evolutionary pressure.

Economic constraints are what may happen to select for simplicity (Unix), compatibility (x86), development speed (Netscape) – or any other quality that might result in an otherwise worse product.

Just like Alan Kay said – but contrary to the belief of Linus Torvalds, the belief that ultimately, the result of evolution is actually better than anything that could have been achieved through design without the feedback of evolutionary pressure.

From this viewpoint, Worse Is Better ends up actually better than any real alternative – whereas from Alan Kay's viewpoint, Worse Is Better is actually worse than what's achievable.

(A bit convoluted, no? In fact, Richard Gabriel wrote several follow-ups, not being able to decide if Worse Is Better was actually better, or actually worse. I'm not trying to help decide that – just to show what makes one think it's actually better or worse.)

***

That's the first part – I hope to have shown that your view of evolution has a great effect on your design style.

If evolution is in the center of your worldview, if you think about viability as more important than perfection in any area, then you'll tend to design in a Worse Is Better style.

If you think of evolutionary pressure as an obstacle, an ultimately unimportant, harmful distraction on the road to perfection, then you'll prefer designs in The Right Thing style.

But why do people have a different view of evolution in the first place? Is there some more basic assumption underlying this difference? I think I have more to say about this, though it's not in nearly as finished form as the first part, and I might write about it all in the future.

Meanwhile, I want to conclude this first part with some thoughts on why it all matters personally to me.

I'm a perfectionist, by nature, and compromise is hard for me. Like many developers good enough to be able to implement much of their own ambitious ideas, I turned my professional life into a struggle for perfection. I wasn't completely devoid of common sense, but I did things that make me shiver today.

I wrote heuristic C++ parsers. I did 96 bit integer arithmetic in assembly. I implemented some perverted form of thread migration on the bare metal, without any kind of OS or thread support. I did many other things that I'm too ashamed to admit.

None of it was really needed, not if you asked me today. It was "needed" in the sense of being a step towards a too-good-for-my-own-good, "perfect" solution. Today I'd realize that this type of perfection is not viable anyway (in fact, none of these monstrosities survived in the long run). I'd choose a completely different path that wouldn't require any such complications in the first place.

But my stuff shipped. I was able to make it work. You don't learn until you fail – at least I didn't. Perfectionists are stubborn.

Then at one point I failed. I had to throw out months worth of code, having realized that it's not going to fly.

And it so happened that I was reading Unix-Haters, and I was loving it, because I'm precisely the type of perfectionist that these people are, or close enough to identify with them. And there was this essay there about Worse Is Better vs The Right Thing.

And I was reading it when I wrote the code soon to be thrown out, and I was reading it when I decided to throw it out and afterwards.

And I suddenly started thinking, "This is not going to work, this type of thing. With this attitude, if you want it all, consistency, completeness, correctness – you'll get nothing, because you will fail, completely. You're too dumb, I mean I am, also not enough time. You have to choose, you're not going to get it all so you better decide what you want the most and aim at that."

If you read the Unix-Haters, you'll notice a lot of moral outrage – perfectionists have that, moral outrage at something imperfect. Especially at someone who knowingly chooses to aim at less than perfection. Especially if it's due to the ulterior motive of wanting to succeed.

And I felt a counter-outrage, for the first time. "What do you got to show, you got nothing. What good are your ideals if you end up dead? Dead bodies smell bad to us for a reason. Technical superiority without evolutionary superiority? Evolutionary inferiority means "dead". How can "dead" be technically superior? What have the dead ever done for us?"

It was huge, for me. I mean, it took a few years to truly sink in, but that was the start. I've never done anything Right since. And I've been professionally happy ever after. I guess it's a kind of "having swum ashore".

Work on unimportant problems

"Work on important problems": ~40900 results.

"Work on unimportant problems": ~18 results.

– Google (at the time of writing), tempting the contrarian in me

It seems obvious that some problems are important to solve and some aren't, as in, curing cancer is more important than delivering social gaming. Often, people lament the abundance of tech firms working on ultimately unimportant stuff, and advise to work on important problems and not just chase the money.

I guess I agree that some problems are ultimately more important than others. But I don't think it follows that working on the important ones is better.

Working on unimportant problems can create important side-effects. A whole lot of mission-critical, world-changing and even life-saving tech is a by-product of "unimportant" things – time-wasting infotainment products, or personal pet projects started without a grand noble cause.

For instance, GPU hardware was developed to run first-person shooters with increasingly fancier graphics. Today, it powers some of the largest high-performance computing clusters where "important" science is done.

Other types of processors powering HPC clusters weren't designed for HPC, either. Hardware originally designed for scientific computing is dead – Cray is the iconic example – and replaced by cheaper and more powerful microprocessors designed to run things like office software. Office software arguably solves no important problem: as Berglas convincingly argues, office automation results not in increased productivity, but in increased complexity of rules and regulations.

All popular programming languages and operating systems, without a single exception I can think of, began either as personal projects or commercial projects not aiming to solve any problem "important" by itself. People hacked on the stuff for pleasure (C, Unix, Linux, Python, Ruby, PHP), or to conquer the world of businessy/officy/enterprisey software (Windows, VB, Java, C#, ASP). One language more specifically designed for the implementation of important software is Ada – but most important programs are written in something else.

It can even seem that not much important computer hardware or software came out of institutions directly dealing with important problems. Much of the Internet protocol stack is one big exception, and arguably so is HTML – but probably not JavaScript.

And, certainly, it's the "unimportant" social companies that made publishing and coordination via Internet universally accessible. Myself, I'm not very fond of either Facebook, Twitter, etc. or the kind of political activity that's coordinated through these sites nowadays, but it's "important", without doubt – another important side-effect of unimportant time-wasting projects.

One might wonder how anything of importance can possibly come out of, say, FarmVille. I really don't know – however, I couldn't guess how anything of importance could come out of DOOM, and it did.

And then there's a reason why so much of the best tech comes out of the least "important" markets. These markets are big, and they're free. Important problems tend to imply a smallish scale, or heavy regulation, or both. So you can't finance the work, and/or can't get any work done anyway.

Consider the aerospace software market – there aren't many planes, but a whole lot of regulation. Philip Greenspun, a software entrepreneur, a flight instructor and an expert witness in both software-related and aviation-related lawsuits, had this to say about the Colgan 3407 disaster:

Who crashed Colgan 3407? Actually the autopilot did. … The airplane had all of the information necessary to prevent this crash. The airspeed was available in digital form. The power setting was available in digital form. The status of the landing gear was available in digital form. …

How come the autopilot software on this $27 million airplane wasn’t smart enough to fly basically sensible attitudes and airspeeds? Partly because FAA certification requirements make it prohibitively expensive to develop software or electronics that go into certified aircraft. It can literally cost $1 million to make a minor change. Sometimes the government protecting us from small risks exposes us to much bigger ones.

The same is happening in the automotive market, the healthcare market, etc. There's progress, of course, just nowhere near the progress in more frivolous areas – and much of the progress in "important" areas is a byproduct of progress in frivolous areas. As in, the best system for managing patients' records may well be Google Docs that doctors access from their iPads.

By the way, the importance of an issue correlates with the stupidity of rules, not just in technology, but in most things in life. The hoops you must jump through to get an "important" product out the door are not fundamentally different from airport security checks.

The airport security theater results in little added security. Likewise, the quality theater necessarily surrounding any life-saving technology results in little added quality. However, for much the same reasons, both are unavoidable. I've been working on automotive accident prevention systems for the last decade, and as time goes by, the regularly scheduled cavity searches are only getting worse.

So if you ask me – by all means, work on unimportant problems. They're often more fun to work on, and ultimately you never know how important they really are.

Email is evil

Personally, I love email:

  • It's still the best way to talk online, overall – the most open format, the best client programs.
  • Online beats offline since everything is archived and searchable.
  • Written beats spoken since you have time to think stuff through, and you can attach images, spreadsheets, code, etc.

However, I noticed that email discussions bring the worst out of people, whereas walking over to them and talking brings the best out of them. I guess it's because emails feel impersonal, leading to "email rage" much like feeling isolated inside a car leads to "road rage".

On top of that, for many people email is their todo list, there still really being no better alternative for keeping a todo list. What this means though is that sending an email with a suggestion implying work on their part without prior face-to-face discussion looks like a written order to do something. I believe this impression can't be avoided even with the most polite, "pretty please"-infested wording. It still feels like "you didn't even bother to talk to me and you expect me to do things!"

So I decided, roughly, to never open any discussion over email. It's fine for followups and bug reports, and it's fine if it's known to work for the people involved. But my default assumption is that email is an evil thing capable of creating tensions and conflicts out of nowhere. Much better to call the person, check that they're available to talk and go talk to them. Then, maybe, send them the summary over email to get all that archiving and searching goodness without the evil price.

Why programming isn't for everyone

Today I learned about HyperCard, a system where you could implement a basic calculator in a few easy steps, one of them involving the following impressively English-like snippet:

on mouseUp
  get name of me
  put the value of the last word of it after card field "lcd"
end mouseUp

The article depicts HyperCard as a system making programming accessible to people who aren't professional developers. It is claimed that Apple likely killed off the product because it's inconsistent with its business model (roughly, devices bought to consume rather than to create).

I sympathize with the sentiment – I very much like stuff you can tinker with, and dislike business models discouraging tinkering. However, I don't think businesses have the power to prevent anything that works well for many people from happening. A conspiracy of typewriter manufacturers could never stop the PC.

This seems especially true with software, where huge systems can be built by volunteers in their spare time. If an idea works, if a software system wants to be built around it, it will be built.

Of course it may be the case that the time hasn't come for a programming system for non-developers. It's just my opinion that it never will come, not really. Why?

Not because you need much education to program. Very useful stuff can be built without knowing why optimal sorting is O(n*log(n)), or even what big O means.

Not because programming languages must have, or typically have, arcane syntax. As a kid, I found Pascal's somewhat English-like "begin" and "end" off-putting, and was greatly relieved to discover Algolish braces. How close to natural language programming syntax can get, and whether it's beneficial to go there at all, is IMO an irrelevant question. The fact is that programming languages can be very readable to people.

The main reason is that development leads to maintenance, and maintenance leads to suffering.

For example, if your program stores persistent data, and you want to change it, your changes to the program must be made so as to preserve the meaning of existing data. This part of development causes major pain everywhere, from video recording to financial databases to compiler construction. No amount of knowledge and no amount of support from the tools make this fun.
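
To make that concrete, here's a minimal sketch – a made-up record format, not from any real project – of the kind of code persistent data forces on you: a record that grew a field in "version 2" of the file format, so the loader has to keep understanding version 1 forever.

#include <cstdint>
#include <istream>

struct Record {
    uint32_t id;
    uint32_t flags;  // added in format version 2; absent from old files
};

// Reads one record, translating old-format data into the current struct.
// Every future change to Record has to preserve the meaning of data
// already written to disk by older versions of the program.
bool read_record(std::istream& in, uint32_t file_version, Record& out) {
    if (!in.read(reinterpret_cast<char*>(&out.id), sizeof out.id))
        return false;
    if (file_version >= 2)
        return bool(in.read(reinterpret_cast<char*>(&out.flags), sizeof out.flags));
    out.flags = 0;  // pick a meaning-preserving default for old data
    return true;
}

And this is the easy case – wait until someone wants to rename a field, or split one record into two.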

There are many other things. Everything in your program's environment is unstable and you must constantly update the program to keep up. Your program gets cluttered with options and you forget what does what. There are cases you didn't test – spaces in the names, empty data fields, reverse order of operations.

As a result, maintenance means dealing with misbehaving programs that eat data, send trash around, or simply make you wait for an hour and then watch them produce garbage.

This never ends and quickly stops being fun. When something useful cannot be done quickly and isn't the average person's idea of fun, it becomes the business of professionals – or hardcore hobbyists indistinguishable from professionals. As a counter-example, many people like cooking in their spare time without necessarily getting anywhere near the level of a chef or spending that much time on it. Con Kolivas, on the other hand, could technically be called a "hobbyist" programmer, but he could be called a "professional" as well.

Maybe I'm wrong, maybe there are plenty of places where a sprinkle of logic – in textual form or graphical form or whatever form – can be figured out quickly, left alone and be useful ever after. It's just that it's usually the opposite with me. Every time I have a nice little idea it takes me 10x the time it "should" take to implement, and most things keep biting me once in a while for a long time.

Programming isn't for everyone because it is not fun to maintain what was fun to program.

Compensation, rationality and the project/person fit

To negotiate compensation, you need to compare it to something. There are two principally different things people compare compensation to:

  1. Available alternatives. Employee: "I could get twice as much at Microsoft." Employer: "We can hire Bob for a half of your salary."
  2. Peers' compensation. Employee: "Jeff gets twice as much and he's not better than me." Employer: "John gets half your salary and you're not better than him."

I believe the second approach – comparing "ability" and having a common level of compensation for people "at the same level of ability" – is the worse approach. Its main drawbacks are:

  • People look into each other's pockets too much that way
  • It is, in a basic economic sense, an "irrational" approach
  • It ignores the project/person fit

I'll discuss all of these drawbacks, mostly focusing on ignoring the project/person fit – in my opinion, the worst part.

Looking into each other's pockets

Even if management doesn't disclose the way people are labeled and what compensation corresponds to each label, people have an incentive to find out all about this. This means that everyone will know how much everyone else gets, and how one must be labeled to earn a given amount.

People looking into each other's pockets is bad for everyone:

  • Invariably people will find others' compensation unjust, which doesn't improve team spirit.
  • You get Peter Principle situations where, say, a strong developer's only way to get a raise is to become a manager, at which he might very well suck, etc.
  • Sometimes the employer does want to set exceptional conditions for someone – pay someone significantly more or less than someone else with the same title. However, if everyone tends to find out about everyone else's compensation, it becomes hard to make these exceptions as it is guaranteed to upset people.

Others' compensation is one of those things that are better left unknown. It's a pity if you tempt people to find it out.

Comparing to imaginary alternatives is "irrational"

If I'm working on X, Jeff works on Y and John works on Z, it makes no sense to compare my compensation to theirs. Whichever of us is unhappy enough with the current arrangement to terminate it – that is, whether I quit or get fired – neither Jeff nor John will replace me, nor will I replace them.

Jeff and John usually have to keep working on Y and Z, so they can't work on X if I quit. Nor will I work on Y and Z – even if I quit, not the company, but just my team, and join their team in the same company. They're already there working on Y and Z – so I won't work on Y or Z, but on W.

Therefore, the employer should compare my compensation to what he'd pay someone else to do X, including the cost of training him. I should compare to what I'd be paid to do W, including the cost of having to learn to do it.

Why should we compare to these things and not others? Because these are our actual alternatives. Jeff's and John's compensation has nothing to do with our actual alternatives.

To which someone can legitimately still reply: why? Someone can say, I still want to compare to Jeff's and John's compensation. So what if you're saying that it's "economically irrational" to consider things unrelated to the real alternatives in a price negotiation? It's my price negotiation, I can compare to whatever I want!

That someone would be right, in a way. It's not like there's a monopoly on the definition of "economic rationality" – one could certainly find an economist claiming that looking at your peers is the rational thing to do, or at least the natural thing to do.

Say, Robert Frank – "Choosing the Right Pond". You know, evolutionary considerations – you're trying to impress a potential mate with your salary, the mate compares within the "pond", an unusually high salary is an externality, etc.

Basically it's partners that you compete for, and it's your peers who you compete against, so it's their compensation that you should care about. (Does this sound just like your workplace? I hope not…)

As an aside, I don't understand evolutionary definitions of "rationality", not really. I mean, if the ultimate goal is to pass your genes, shouldn't you become a serial rapist targeting nuns or someone else who isn't likely to use abortion? If you aren't doing this, and you advocate the evolutionary view of rationality, aren't you proving your own irrationality by your own actions? And if you are irrational, then why should irrational people like you be trusted to define rationality in the first place?

But the fact that I don't like the "evolutionary" view of rationality and prefer, in this context, the "classical economics" definition is just my opinion. An employer can have his own – just like a friend who kept trying to sell his car, for a long time, until he found someone willing to pay the high price.

Another friend said, when they discussed markets, "what you did is irrational – markets don't behave that way – in a market, you lower the price if you don't have a buyer". To which the seller responded – "first, I did sell high eventually; second – you can't tell me how markets behave – I am the market!"

So yeah, if you're an employer or an employee and you want to compare compensations regardless of what alternatives are actually available – you can of course do this. You are the market – economists, bloggers or anyone else can try to describe your behavior and predict its outcomes, but they aren't entitled to label it "rational" or "irrational", not really.

All that can be said is that considering imaginary alternatives instead of the real ones can very well make you face the real ones.

That is, suppose you say to an employee, "John gets half your salary and you're not better than him." Suppose the employee replies, "I could get twice as much at Microsoft." His alternative is real – he quits. Your alternative is not real – John is not available to replace the guy who quit. Now you're facing your real alternatives – which can be much worse than raising the guy's salary would have been.

Isn't it a better idea to consider your real alternatives during the negotiations?

To which one could reply – how bad can those alternatives be, really? I mean, we hired John, right? And he's just as good. So we can always hire this sort of person for this sort of price, right? Yeah, there are the training costs, but that's all there is to it, no?

I believe that there's more to it than training costs. The big thing is the project/person fit.

The project/person fit

It's magical. If a person wants to do something, I'm so much in favor of letting them, whatever other things they'd have to stop doing. I mean, there are things which nobody will ever do except the one person – or maybe one of two or three people – to whom it's important.

Or someone could do it, but not nearly as well. And not because he's "worse" – he may be "better" on all the common benchmarks (IQ, grades, reputation, whatever). He's not "worse" in any quantifiable way, but it just doesn't click – the project is not a good fit for him.

It's a depressing thought for a manager – a part of a manager's helplessness. A manager can't do anything himself – the most helpless creature around. He's always responsible for what other people do. He can pick the people, talk to people, negotiate with people, reshuffle people. But that is all he can do – and not a single bit of real work that must be done to make his project succeed.

This means an extreme dependence on other people, which is stressful. The project/person fit makes this much worse. You're basically constrained to not move people away from projects when there's this magical click. They're irreplaceable, so you depend on them tremendously – not very comforting. So it's natural to argue that this magic business doesn't really exist – everyone is replaceable.

Now, I'm not saying that people actually "can't be replaced" – far from it. That thought would make me lose sleep as a team leader – and it would offend me as a programmer.

I mean, if our processors are "universal computing machines", then surely programmers ought to be universal as well, right? I much prefer to think of myself as a "replaceable cog" – but a universal cog – than as an irreplaceable part of the peculiar machinery of my current workplace, obviously useless outside it because of my extreme specialization.

So actually I'm at the other extreme on this one, most likely – I don't think very much of "relevant experience", and I'll be the first to say that a person new to something will cope with it very well, don't worry. Everyone is replaceable, because everyone can deal with everything.

For instance, in our recent round of work on hardware verification, we had a tough deadline, so there was a single hardware module that 5 programmers worked on. Of them, 3 had no experience in hardware verification at all, so they had to learn about hardware simulators and waveform viewers and stuff.

Normally, just one person would do that work, but it'd take longer and we couldn't afford the latency. We also had to swap people in and out to do other things, and they had to continue where the previous person left off. And it worked, basically. So I think I'm very much at the other extreme – programmers are universal, and they'll deal.

What do I mean by this "project/person fit" then?

What I mean is that there's still a 10x productivity difference between a person struggling with this important stuff that you dumped on them but they kinda don't understand or care about very much, and a person who wants the thing done.

Actually it's more than 10x – you can't quantify it, it's qualitatively different. A bird doesn't just move faster than a snail. You can't express the difference between crawling and flying in a single number, even if your HR policy mandates this sort of quantification.

People have their own priorities

A manager classifies things as important and unimportant, and he might be tempted to think that somebody gives a damn about his view of these matters.

But they don't give a damn. They classify things as "stuff the manager wants" and "stuff that they want". Stuff that's only important to them because you said so crawls. Stuff that they feel is important and interesting flies.

Managers might think that work gets done because they want it done. It's true – but the best work gets done because people who do it want it done.

And people are amazing in the diversity of their tastes. Taste depends on many things – personal talents and interests, personal history that makes some problems closer to your heart than others, and so on – but no matter what the reasons are, the result is that tastes are wildly different.

Consider the following things, all among the stuff our team works on:

  • A distributed build & run server.
  • A debugger agent – porting gdb to custom hardware and OS.
  • A graphical editor for tagging objects in video clips.
  • A static memory manager built around C language extensions and a constraint solver.

I think all of them are important, and all of them are interesting. As a programmer, I'd work on any of them. I mean, does any of this sound like boring grunt work? Certainly they're all nicer than verifying a hardware module that you didn't specify under time pressure, at least if you ask me.

However, in my team, there's just one, two, sometimes ~1.5 people who actually want to work on each of these things. Moreover, most of them have an aversion to most other things on the list.

Now, if it was strictly necessary, any of them would work on any of these projects. And they'd do a good job even if they got the one they hated the most. But it'd be uninspired, and nobody could blame them.

How easy is it to find someone who'd love to do a project? I'll tell you – real damn hard. I mean, I'm a language geek; in my opinion, everyone wants to work on programming language extensions. And you know what? They don't. Not really. Most don't want to at all. Then there are many who like the idea, in principle. But not that kind of language, or not that kind of extensions. There's no spark in their eyes – until the right person shows up.

Similarly with the other things. You'd think that a person who likes the debugger agent would also like the distributed build server, no? I'd expect that, definitely – but she doesn't. And you can't make someone like something. Usually you can't even pay them to like it. They just won't.

Some projects are optional. With these, I will wait for years until the right person shows up. I feel guilty – people are asking for it, it'd be great if we had it, it would become an enabler for things now unthinkable. But who am I kidding? Nobody wants to do this now, not really. Better wait until he shows up.

When he shows up, what do I say? I say, keep him. Really. Don't let the thing turn into a wasteland just because programmers (actually) are universal, replaceable cogs!

Some projects are not optional – you must do them no matter what. When there's no right person to do such a thing – watch years of work, tears and sweat produce a mountain of code dripping with hate and depression. I'm serious – sometimes I can actually look at code and see how nobody ever loved it.

I've seen brilliant people produce disgusting code nobody wants to touch. Certainly I couldn't help it myself – my sense of duty did not help. I did it on time, it worked, and it was a toxic waste.

It doesn't help that the manager thinks it's important. It doesn't help that I agree with him. If I don't like it, I won't do it very well.

Sometimes – many times – the right person arrives years after the wrong people – the wrong people for this project – have been spitting blood trying to make it work. It takes a few months and the scenery is transformed. Mountains of hate are gone. You have a working system. People who lost hope for this particular area to ever become habitable, to stop smelling of fail, suddenly smile.

Would you let that person go, just because John is "just as good" and you pay him less? There is no way that John is going to take over this thing. Even if he's available. He isn't interested. He couldn't care less. He could take over just like anyone else, but it'd be toxic waste all over again. Come on!

Sometimes a programmer will be moved away from a project – or not be allowed to do it – because of his already high compensation. "We can find someone cheaper to do this". Yes – but not someone who wants to do this! This just brings tears to my eyes.

But if he loves the project, he won't quit, right?

Good thinking. People who can be replaced with someone like John should therefore be compared to John. People who can't be replaced with someone like John can still be compared to him – they're the ones who love their work, so they likely won't quit, and then we can sensibly compare everyone to everyone in a reasonable manner.

They'll quit.

They'll quit even if it's "irrational" for them. People can quit a project they love over compensation, and then spend years until they find something nice to work on. Often they feel it wasn't worth it, or at least are unhappy with their working situation.

But it doesn't help that you were right and that they should have stayed, settling for the fair compensation level of John and working on their favorite stuff. It doesn't help because the loss is yours as well.

Why do people behave in this "irrational" way, apart from having too high expectations about their alternatives? The economist David Friedman gives an evolutionary explanation, if you like that sort of thing:

…human beings regard the usual terms of exchange as right and any deviation from those terms that makes them worse off as a presumptively wicked act by the other party. This feature resulted in human beings that possessed it getting better terms in bilateral monopoly bargains in the environment in which we evolved…

"Bilateral monopoly" is basically the situation you and your employee find yourselves in once a project "clicks" with him. It's hard for you to replace him – and it's hard for him to replace you. This may tempt you to lower the price you're willing to pay. The response Mother Nature had equipped us with for these cases is that the employee thinks you're wicked, and he quits.

This reaction is "irrational" – in the sense that he's now worse off. But it's very much "rational", in the sense that the threat of "irrational" quitting should improve his terms – if you know that the threat is real, despite the fact that actually quitting would make him worse off.

Well, in my experience, the threat is very real alright. Worth taking into account.

Why management likes to set standard compensation levels

I suspect the benefit is that it makes decision-making easier on the scale of a large company. It works reasonably well and is very easy to implement. It's a bit like using a simple heuristic in code because it's just 5 lines of code and it sort of works.

"Bounded rationality", if you like (…isn't "bounded rationality" what used to be called "stupidity"? Aren't "the cognitive limitations of the mind" mentioned in the article also called "stupidity"? I'm not mocking stupidity – I'm certainly equipped with a high degree of stupidity myself, and you can trace its influence on my decision-making. I'm just wondering why invent new terms when we already have perfectly good ones.)

Anyway, if you know why standard compensation levels are a good idea – a rational argument for them in an unbounded way – let me know in the comments. Puzzles me plenty.

Cycles, memory, fuel and parking

In high-performance, resource-constrained projects, you're not likely to suddenly run out of cycles – but you're quite likely to suddenly run out of memory. I think it's a bit similar to how it's easy to buy fuel for your car – but sometimes you just can't find a parking spot.

I think the difference comes from pricing. Processor cycles are priced similarly to fuel, whereas memory bytes are priced similarly to parking spots. I think I know the problem but not the solution – and will be glad to hear suggestions for a solution.

Cycles: gradual price adjustment

If you work on a processing-intensive project – rendering, simulation, machine learning – then, roughly, every time someone adds a feature, the program becomes a bit slower. Every slowdown makes the program a bit "worse" – marginally less useable and more annoying.

What this means is that every slowdown is frowned upon, and the slower the program becomes, the more a new slowdown is frowned upon. From "it got slower" to "it's annoyingly slow" to "we don't want to ship it this way" to "listen, we just can't ship it this way" – effectively, a developer slowing things down pays an increasingly high price. Not money, but a real price nonetheless – organizational pressure to optimize is proportionate to the amount of cycles spent.

Therefore, you can't "suddenly" run out of cycles – long before you really can't ship the program, there will be growing pressure to optimize.

This is a bit similar to fuel prices – we can't "suddenly" run out of fuel. Rather, fuel prices will rise long before there'll actually be no fossil fuels left to dig out of the ground. (I'm not saying prices will rise "early enough to readjust", whatever "enough" means and whatever the options to "readjust" are –  just that prices will rise much earlier in absolute terms, at least 5-10 years earlier).

This also means that there can be no fuel shortages. When prices rise, less is purchased, but there's always (expensive) fuel waiting for those willing to pay the price. Similarly, when cycles become scarce, everyone spends more effort optimizing (pays a higher price), and some features become too costly to add (less is purchased) – but when you really need cycles, you can get them.

Memory: price jumps from zero to infinity

When there's enough memory, the cost of an allocated byte is zero. Nobody notices the memory footprint – roughly, RAM truly is RAM, the cost of memory access is the same no matter where objects are located and how much memory they occupy together. So who cares?

However, there comes a moment when the process won't fit into RAM anymore. If there's no swap space (embedded devices), the cost of an allocated byte immediately jumps to infinity – the program won't run. Even if swapping is supported, once your working set doesn't fit into memory, things get very slow. So going past that limit is very costly – whereas getting near it costs nothing.

Since nobody cares about memory before you hit the hard limit, this moment can be very sudden: without warning, you suddenly can't allocate anything.

This is a bit similar to a parking lot, where the first vehicle is as cheap to park as the next and the last – and then you can't park at all. Actually, it's even worse – memory is more similar to an unmarked parking lot, where people park any way they like, leaving much unused space. Then when a new car arrives, it can't be parked unless every other car is moved – but the drivers are not there.

(Actually, an unmarked parking lot is analogous to fragmented memory, which can be solved by heap compaction – at the cost of extra runtime latency. But the biggest problem with real memory is that people allocate many big chunks where a few small ones would do – and probably would be used, if memory cost anything above zero. Can you think of a real-world analogy for that?..)
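
Just to make the "big chunks where small ones would do" habit concrete – a made-up C++ fragment, with invented numbers, showing what zero-priced bytes tend to buy:

#include <cstddef>
#include <vector>

constexpr std::size_t MAX_POINTS = 1 << 20;  // "there can never be more than a million, right?"

// The zero-price habit: grab the worst case up front, whether it's needed or not.
std::vector<float> make_buffer_worst_case() {
    std::vector<float> buf;
    buf.reserve(MAX_POINTS);      // ~4MB held for as long as the buffer lives
    return buf;
}

// What you'd write if bytes had a visible price: size to the actual input.
std::vector<float> make_buffer_sized(std::size_t actual_points) {
    std::vector<float> buf;
    buf.reserve(actual_points);   // typically a few KB
    return buf;
}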

Why not price memory the way we price cycles?

I'd very much like to find a way to price memory – both instructions and data – the way we naturally price cycles. It'd be nice to have organizational pressure mount proportionately to the amount of memory spent.

But I just don't quite see how to do it, except in environments where it happens naturally. For instance, on a server farm, larger memory footprint can mean that you need more servers – pressure naturally mounts to reduce the footprint. Not so on a dedicated PC or an embedded device.
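
For what it's worth, the closest mechanical approximation I can imagine is a per-component memory budget checked by the build – a crude way to make the "price" rise before it jumps to infinity. A minimal sketch, with a made-up report format (component name, bytes used, bytes budgeted) and made-up thresholds:

#include <cstdio>
#include <fstream>
#include <sstream>
#include <string>

int main(int argc, char** argv) {
    // Expects lines of the form: <component> <bytes_used> <bytes_budgeted>
    std::ifstream report(argc > 1 ? argv[1] : "footprint.txt");
    std::string line;
    int status = 0;
    while (std::getline(report, line)) {
        std::istringstream fields(line);
        std::string component;
        double used = 0, budget = 0;
        if (!(fields >> component >> used >> budget) || budget <= 0)
            continue;
        double fill = used / budget;
        if (fill > 1.0) {
            std::printf("ERROR: %s is over its memory budget (%.0f%%)\n",
                        component.c_str(), fill * 100);
            status = 1;   // the "we just can't ship it this way" point
        } else if (fill > 0.8) {
            std::printf("WARNING: %s is at %.0f%% of its memory budget\n",
                        component.c_str(), fill * 100);
        }
    }
    return status;
}

Of course, someone has to pick the budgets, and the team has to take the warnings seriously – which is exactly the nominal-goals problem I get to below.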

Why isn't parking like fuel, for that matter? Why are there so many places where you'd expect to find huge underground parking lots – everybody wants to park there – but instead find parking shortages? Why doesn't the price of parking spots rise as spots become taken, at least where I live?

Well, basically, fuel is not parking – you can transport fuel but not parking spots, for example, so it's a different kind of competition – and then we treat them differently for whatever social reason. I'm not going to dwell on fuel vs parking – it's my analogy, not my subject. But, just as an example, it's perfectly possible to establish fuel price controls and get fuel shortages, and then fuel becomes like parking, in a bad way. Likewise, you could implement dynamic pricing of parking spots – more easily with today's technology than, say, 50 years ago.

Back to cycles vs memory – you could, in theory, "start worrying" long before you're out of memory, seeing that memory consumption increases. It's just not how worrying works, though. If you have 1G of memory, everybody knows that you can ship the program when it consumes 950M as easily as when it consumes 250M. Developers just shrug and move along. With speed, you genuinely start worrying when it starts dropping, because both you and the users notice the drop – even if the program is "still usable".

It's pretty hard to create "artificial worries". Maybe it's a cultural thing – maybe some organizations more easily believe in goals set by management than others. If a manager says, "reduce memory consumption", do you say "Yes, sir!" – or do you say, "Are you serious? We have 100M available – but features X, Y and Z are not implemented and users want them!"

Do you seriously fight to achieve nominal goals, or do you only care about the ultimate goals of the project? Does management reward you for achieving nominal goals, or does it ultimately only care about real goals?

If the organization believes in nominal goals, then it can actually start optimizing memory consumption long before it runs out of memory – but sincerely believing in nominal goals is dangerous. There's something very healthy in a culture skeptical about anything that sounds good but clearly isn't the most important and urgent thing to do. Without that skepticism, it's easy to get off track.

How would you go about creating a "memory-consumption-aware culture"? I can think of nothing except paying per byte saved – but, while it sounds like a good direction with parking spots, with developers it could become quite a perverse incentive…