There's this common notion of "10x programmers" who are 10x more productive than the average programmer. We can't quantify productivity, so we don't know whether it's literally true. But certainly enough people appear unusually productive to sustain the "10x programmer" notion.
How do they do it?
People often assume that 10x more productivity results from 10x more aptitude or 10x more knowledge. I don't think so. Now I'm not saying aptitude and knowledge don't help. But what I've noticed over the years is that the number one factor is 10x more selectivity. The trick is to consistently avoid shit work.
And by shit work, I don't necessarily mean "intellectually unrewarding". Rather, the definition of shit work is that its output goes down the toilet.
I've done quite a lot of shit work myself, especially when I was inexperienced and gullible. (One of the big advantages of experience is that one becomes less gullible that way - which more than compensates for much of the school knowledge having faded from memory.)
Let me supply you with a textbook example of hard, stimulating, down-the-toilet-going work: my decade-old adventures with fixed point.
You know what "fixed point arithmetic" is? I'll tell you. It's when you work with integers and pretend they're fractions, by implicitly assuming that your integer x actually represents x/2^N for some value of N.
So to add two numbers, you just do x+y. To multiply, you need to do x*y>>N, because plain x*y would represent x*y/2^(2N), right? You also need to be careful so that this shit doesn't overflow, deal with different Ns in the same expression, etc.
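In code, the whole scheme fits in a few lines. Here's a minimal C++ sketch (my own illustration, not code from the project):

```cpp
#include <cstdint>
#include <cstdio>

// Q16.16 fixed point: a 32-bit integer x represents the real number x / 2^16.
const int N = 16;
typedef int32_t fixed;

fixed from_double(double d) { return (fixed)(d * (1 << N)); }
double to_double(fixed x)   { return (double)x / (1 << N); }

fixed add(fixed x, fixed y) { return x + y; }  // same scale - plain addition works
fixed mul(fixed x, fixed y) {
    // x*y represents x*y / 2^(2N); shift right by N to get back to /2^N.
    // The 64-bit intermediate is what keeps the product from overflowing.
    return (fixed)(((int64_t)x * y) >> N);
}

int main() {
    fixed a = from_double(1.5), b = from_double(2.25);
    printf("%f\n", to_double(mul(a, b)));  // prints 3.375000
}
```

Note the 64-bit intermediate in mul - that's exactly the kind of thing that later called for the dreadful inline assembly mentioned below.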
Now in the early noughties, I was porting software to an in-house chip which was under development. It wasn't supposed to have hardware floating point units - "we'll do everything in fixed point".
Here's a selection of things that I did:
- There was a half-assed C++ template class called InteliFixed<N> (there still is; I kid you not). I put a lot of effort into making it, erm, full-assed (what's the opposite of half-assed?). This included things like making operator+ commutative when it gets two fixed point numbers of different types (what's the type of the result? - see the sketch after this list); making sure the dreadful inline assembly implementing 64-bit intermediate multiplications inlines well; etc. etc.
- My boss told me to keep two versions of the code - one using floating point, for the noble algorithm developers, and one using fixed point, for us grunt workers fiddling with production code. So I manually kept the two in sync.
- My boss also told me to think of a way to run some of the code in float, some not, to help find precision bugs. So I wrote a heuristic C++ parser that automatically merged the two versions into one. It took some functions from the "float" version and others from the "fixed" version, based on a header-file-like input telling it what should come from which version.
- Of course this merged shit would not run or even compile just like that, would it? So I implemented macros where you'd pass to functions, instead of vector<float>&, a REFERENCE(vector<float>), and a horrendous bulk of code making this work at runtime when you actually passed a vector<InteliFixed> (which the code inside the function then tried to treat as a vector<float>).
- And apart from all that meta-programming, there was the programming itself of course. For example, solving 5×5 equation systems to fit polynomials to noisy data points, in fixed point. I managed to get this to work using hideous normalization tricks and assembly code using something like 96 bits of integer precision. My code even worked better than single-precision floating point without normalization! Yay!
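For flavor, here's a toy sketch of what such a class looks like, including one possible answer to the "type of the result" question. InteliFixed itself is in-house code I can't show, so every name and design choice below is an illustration, not the real thing:

```cpp
#include <cstdint>

// A toy stand-in for an InteliFixed-like template; raw represents raw / 2^N.
template<int N>
struct Fixed {
    int32_t raw;
    explicit Fixed(int32_t r = 0) : raw(r) {}
    static Fixed from_double(double d) { return Fixed((int32_t)(d * (1LL << N))); }
    double to_double() const { return (double)raw / (1LL << N); }
};

// One possible answer to "what's the type of the result?": the more precise
// of the two operand types, with the coarser operand shifted up to match.
template<int A, int B>
Fixed<(A > B ? A : B)> operator+(Fixed<A> x, Fixed<B> y) {
    const int R = (A > B ? A : B);
    return Fixed<R>((x.raw << (R - A)) + (y.raw << (R - B)));
}

int main() {
    Fixed<8>  a = Fixed<8>::from_double(1.5);
    Fixed<16> b = Fixed<16>::from_double(0.25);
    Fixed<16> c = a + b;  // commutative: b + a yields the same Fixed<16>
    return c.to_double() == 1.75 ? 0 : 1;
}
```

The real class then had to answer the same question for every operator, on top of the overflow handling and mixed-N expressions mentioned above - which is where the months go.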
For months and months, I worked as hard as ever, cranking out as much complicated, working code as ever.
And here's what I should have done:
- Convince management to put the damned hardware floating point unit into the damned chip. It didn't cost that many square millimeters of silicon - I should have insisted on finding out how many. (FPUs were added in the next chip generation.)
- Failing that, lay my hands on the chip simulator, measure the cost of floating point emulation, and use it wherever it was affordable. (This is what we ended up doing in many places.)
- Tell my boss that maintaining two versions in sync like he wanted isn't going to work - they're going to diverge completely, so that no tool in hell will be able to partially merge them and run the result. (Of course this is exactly what happened.)
Why did this end up in many months of shit work instead of doing the right thing? Because I didn't know what's what, because I didn't think I could argue with my management, and because the work was challenging and interesting. It then promptly went down the toilet.
The hardest part of "managing" these 10x folks - people widely known as extremely productive - is actually convincing them to work on something. (The rest of managing them tends to be easy - they know what's what; once they decide to do something, it's done.)
You'd expect the opposite, kind of, right? I mean if you're so productive, why do you care? You work quickly; the worst thing that happens is, nothing comes out of it - then you'll just do the next thing quickly, right? I mean it's the slow, less productive folks who ought to be picky - they're slower and so get fewer shots at new stuff to work on to begin with, right?
But that's the optical illusion at work: the more productive folks aren't that much quicker - not 10x quicker. The reason they appear 10x quicker is that almost nothing they do is thrown away - unlike a whole lot of stuff that other people do.
And you don't count that thrown-away stuff as productivity. You think of a person as "the guy who did X", where X was famously useful - and forget all the Ys which weren't that useful, despite the effort and talent going into those Ys. Even if something else was "at fault", like the manager, or the timing, or whatever.
To pick famous examples: you remember Ken Thompson for Unix - but not for Plan 9, not really, and not for Go, not yet; on the contrary, Go gets your attention because it's a language by those Unix guys. You remember Linus Torvalds even though Linux is a Unix clone and git is a BitKeeper clone - in fact because they're clones of successful products, which therefore had great chances to succeed due to good timing.
The first thing you care about is not how original something is or how hard it was to write or how good it is along any dimension: you care about its uses.
The 10x programmer will typically fight very hard not to work on something that is likely enough to go unused.
One of these wise guys asked me the other day about checkedthreads, which I'd just finished: "so is anyone using that?" - with that trademark irony. I said I didn't know; there was a comment on HN saying that maybe someone would give it a try.
I mean, it's a great thing; it's going to find all of your threading bugs, basically. But it's not a drop-in replacement for pthreads or such; you need to write the code using its interfaces - nice, simple interfaces, but not the ones you're already using. So there's a good chance few people will bother; whereas Helgrind or the thread sanitizer, which have tons of false negatives and false positives, at least work with the interfaces people use today.
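For the curious, here's roughly what "writing the code using its interfaces" means. This sketch is modeled on the checkedthreads README as I remember it - treat the exact names and signatures as an assumption rather than the real API, and note that initialization details are omitted:

```cpp
#include "checkedthreads.h"  // header name assumed

// Instead of spawning pthreads yourself, you hand the framework an indexed
// loop body; since the framework owns the scheduling, its checker can verify
// that different indexes don't race on the same data.
void set_elem(int i, void* context) {
    int* array = (int*)context;
    array[i] = i * i;
}

int main() {
    int array[1000];
    // ct_for(n, f, context, canceller): run f(i, context) for i in 0..n-1,
    // possibly in parallel (signature assumed from memory).
    ct_for(1000, set_elem, array, 0);
    return 0;
}
```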
Why did I bother then? Because the first version took an afternoon to write (that was before I decided I wanted parallel nested loops and stuff), and I figured I had a chance because I'd blog about it (as I do, for example, right now). If I wrote a few posts explaining how you could actually hunt down bugs in old-school shared-memory parallel C code even more easily than with Rust/Go/Erlang, maybe people would notice.
But there's already too high a chance of a flop here for most of the 10x crowd I personally know to bother trying. Even though we use something like checkedthreads internally, and it's a runaway success. In fact, the ironic question came from the guy who put a lot of work into that internal version - because internally, it was very likely to be used.
See? Not working on potential flops - that's productivity.
How to pick what to work on? There are a lot of things one can look at:
- Is there an alternative already available? How bad is it? If it's passable, then don't do it - it's hard to improve on a good thing, and even harder to convince people that the improvements are worth the switch.
- How "optional" is this thing? Will nothing work without it, or is it a bell/whistle type of thing that can easily go unnoticed?
- How much work do users need to put in to get benefits? Does it work with their existing code or data? Do they need to learn new tricks or can they keep working as usual?
- How many people must know about the thing for it to get distributed, let alone used? Will users mostly run the code unknowingly because it gets bundled together with code already distributed to them, or do they need to actively install something? (Getting the feature automatically and then having to learn things in order to use it is often better than having to install something and then working as usual. Think of how many people end up using a new Excel feature vs how many people use software running backups in the background.)
- How much code to deliver how much value? Optimizing the hell out of a small kernel doing MPEG decompression sounds better than going over a million lines of code to get a 1.2x overall speed-up (even though the latter may be worth it by itself; it just necessarily requires 10x the programmers, not one "10x programmer").
- Does it have teeth? If users do something wrong (or "wrong"), does it silently become useless to them (like static code analysis when it no longer understands a program), or does it halt their progress until they fix the error (like a bounds-checked array - see the sketch after this list)?
- ...
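To make the "teeth" point concrete in C++ terms, here's the difference between an unchecked and a bounds-checked array access - a small illustration of my own, not anything from the projects above:

```cpp
#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v(10);
    int i = 15;  // an out-of-bounds index, i.e. a bug
    // No teeth: v[i] = 1 would be undefined behavior - it may appear to
    // "work", silently corrupting memory, and the protection quietly
    // becomes useless.
    // Teeth: at() halts progress with an exception until the bug is fixed.
    try {
        v.at(i) = 1;
    } catch (const std::out_of_range&) {
        std::puts("out-of-bounds write caught - go fix the index");
    }
    return 0;
}
```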
You could easily expand this list; the basic underlying question is, what are the chances of me finishing this thing and then it being actually used? This applies recursively to every feature, sub-feature and line of code: does it contribute to the larger thing being used? And is there something else I could do with the time that would contribute more?
Of course it's more complicated than that; some useful things are held in higher regard than others for various reasons. Which is where Richard Stallman enters and requires us to call Linux "GNU/Linux" because GNU provided much of the original userspace stuff. And while I'm not going to call it "Gah-noo Lee-nux", there's sadly some merit to the argument, in the sense that yeah, unfortunately some hard, important work is less noticed than other hard, important work.
But how fair things are is beside the point. After all, it's not like 10x the perceived productivity is very likely to give you 10x the compensation. So there's not a whole lot of reasons to "cheat" and appear more productive than you are. The main reason to be productive is because there's fire raging up one's arse, more than any tangible benefit.
The point I do want to make is: to get more done, you don't need to succeed more quickly (although that helps) as much as you need to fail less often. And not all failures are due to lack of knowledge or skill; most of them are due to quitting before something is actually usable - or due to there being few chances for it to be used in the first place.
So I believe, having authored a lot of code that went down the toilet, that you don't get productive by working so much as by not working - not on stuff that is likely to get thrown away.