Entries Tagged 'wetware'
March 19th, 2014 — wetware
- I'm not a citizen of a country where "getting more people to code" is a widely shared concern. While my proposal seems to me like a simple solution to a simple problem, this is merely an outsider's observation.
- Recently the tide of calls to "code" seems to have subsided. If it's no longer a pressing social problem, I regret being late with my excellent solution.
- Like all brilliant ideas, my solution is simple, and I was surprised not to see it proposed in the voluminous discussions of the topic. If someone has proposed the same thing earlier without my noticing, I apologize for not giving them proper credit.
For a long time now, people have been voicing a concern about a supposed shortage of computer programmers. Many also express narrower concerns, such as a shortage of children, women or people with a certain skin color in the ranks of computer programmers.
A high-profile example is US President Barack Obama, who went further than urging others to "code" and regretted not being able to do so himself: "I wanted to go in and fix healthcare.gov myself, but I don't write code."
(I understand President Obama. I once wanted to go in and fix international politics. Then I realized that I don't shoot bullets.)
People from technology companies have also urged others to learn programming – for example, Mark Zuckerberg and Bill Gates.
Finally, individuals such as NBA star Chris Bosh have, in their private capacity, encouraged people to learn to code.
Even this humble blogger was asked to give an interview by someone interested in encouraging people in general and women in particular to code! And you know why? Surprisingly, because of a piece I wrote where I said that it was money that attracted me to programming, and it is money that keeps me in programming.
It was then that a solution to the problem at hand popped into my mind.
A possible solution
If there's a shortage of programmers, we could pay programmers more money.
How can this be arranged, and how would it solve the problem? There are several possibilities.
For example, companies such as Google, Apple and Intel could raise wages, causing some people to choose programming over other occupations. Once a sufficient number of people are attracted to programming, wages would fall.
Alternatively, governments such as that headed by Barack Obama could lower programmers' income tax, or introduce a negative tax for programming. Again, rising wages would attract more people to programming. A government convinced that the number of programmers had reached a high enough level could then cancel the subsidy.
Finally, concerned individuals such as the wealthy basketball player Chris Bosh could donate money to computer programmers until enough people are attracted to the profession.
Having outlined a solution to the general problem, we now direct our attention to the narrower concerns of representation of particular groups of people in computer programming.
~~Discrimination~~ affirmative action
Companies could increase the representation of younger people in programming by paying extra money to programmers below a certain age.
Similarly, governments could give tax breaks to younger programmers, or to parents of children who regularly submit their code for review by public officials.
Finally, a trust fund could be set up paying children to learn to program.
A similar approach should in principle be applicable to the problems of increasing the representation of women, people of particular races or other groups.
In some cases, questions of legality arise; for instance, while a trust fund for the benefit of people of one race but not others is probably legal, for-profit companies and governments might not be able to legally discriminate based on race.
However, this problem could be overcome, either by changing the laws or by setting aside a sum of money equal in size to the discriminatory subsidy paid to the members of the group in question. Once enough people from that group enter the programming profession, that sum of money can be divided between working programmers not belonging to the group, in effect undoing the discrimination.
Or something along these lines.
If this strikes you as absurd, does it strike you as even more absurd that people claim something to be a problem when its "solution" is as obvious as it is ridiculous? (Or is it really that ridiculous? Farm subsidies exist. Why not FarmVille subsidies?)
If people don't "learn to code", maybe the option is insufficiently attractive to those who can, given current wages, lifetime employment prospects, and the complete uselessness of the skill outside work.
If companies don't raise wages, maybe they don't feel a very pressing need for more programmers. (Of course they'd still "encourage" people to program – as long as the cost of encouragement is likely to be offset by a fall of wages, once a surplus of programmers results from the encouragement.)
If a situation persists, maybe there's a good reason for it.
It is not completely fair to deny programming its uses outside work. A fairer presentation is an analogy with today's hobbyist 3D printing. An owner of a 3D printer recently told me that "having one really exposes the impotence of… not having one. For instance, I needed this little thingie to hold a shelf. Took 30 minutes to design and print. And where would I get it otherwise?!" The answer, of course, is "at a nearby store" where they have a box full of these thingies at about 20 cents apiece. Of course, in a couple thousand years, his investment in the 3D printer will repay itself through the continuous printing of thingies.
Women in Science by Philip Greenspun is a must-read for anyone interested in increasing the representation of group X in occupation Y:
Adjusted for IQ, quantitative skills, and working hours, jobs in science are the lowest paid in the United States.
This article explores this … possible explanation for the dearth of women in science: They found better jobs.
July 12th, 2013 — wetware
This is a (very late) reply to Patrick McKenzie's "Don't Call Yourself A Programmer, And Other Career Advice". I find much of his advice very sensible, and it might be very helpful to someone at the beginning of their career – assuming they can act upon it (and I really don't know whether my 20-year-old self could actually use the advice to improve his negotiation skills, for example).
There are a few things in the article I disagree with, however. Here I'll mostly focus on those few things, and I recommend you read the original article so that you don't miss the rest of it.
"Disagree" is not necessarily the right word – a more precise way to put it would be "it's different in my experience". Which is to be expected because both of us are speaking based on our own careers, which have been rather different. Patrick McKenzie is a small business owner running Bingo Card Creator and a successful consultant. I'm a lead chip architect at a billion-dollar company. Both of us have thus traveled some distance away from "purely programming" (whatever that means), but in rather different directions.
What company are you going to work for?
Patrick McKenzie says 90% of the jobs involve things like implementing an internal travel expense reporting form, rather than a product shipped to external customers. He advises you to get used to the idea, even though such software is "soul-crushingly boring" as he puts it.
How bad is it, and is it really 90% of the jobs? Spolsky thinks it's maybe 80% – and that it's bad enough to "drain the life out of you". He goes on to elaborate why it "sucks to be an in-house programmer":
- There's rarely a business reason to improve in-house software past the point of "barely good enough". "Forget any pride of craftsmanship – you're going to churn out embarrassing junk".
- At software companies, what you do is more directly related to the way the company makes money, so you're more likely to be respected. "A programmer is never going to rise to become CEO of Viacom, but you might well rise to become CEO of a tech company." "…no matter how critical it was for Viacom to get this internet thing right, when it came time to assign people to desks, the in-house programmers were stuck with 3 people per cubicle in a dark part of the office".
Note that McKenzie and Spolsky are in almost complete agreement over these points. But then Spolsky says you should be gunning for a position in a software company – the environment where creatures of your kind naturally thrive. Conversely, McKenzie explains how to prosper as a programmer outside software companies – moving in the opposite direction of where things go by default (being stuck in a dark part of the office while they're trying to outsource your job.)
So the question is which path you prefer. "Not so fast", you say: one of these jobs is way easier to land – chances are 80-90% that you're not getting inside a software company – so it's not just a question of preference.
Here I disagree: even if only 10-20% of programmers work in software companies (where are the stats?..), and even if they're "the best" (according to what metric?), McKenzie himself says in that same article:
You radically overestimate the average skill of the competition because of the crowd you hang around with: Many people already successfully employed as senior engineers cannot actually implement FizzBuzz.
But if competition is relatively unskilled on average, you probably can land a job in the 10-20% of the sector that you want – as did most people who graduated around the time I did. So I rather firmly believe that it's a matter of choice: do you want to work on in-house software or one-off businessy projects of that kind, or do you prefer a software company?
Let's proceed to McKenzie's advice to in-house programmers – which should in itself help one make that choice.
How to call yourself
One such piece of advice is:
Don't call yourself a programmer. “Programmer” sounds like “anomalously high-cost peon who types some mumbo-jumbo into some other mumbo-jumbo.” Instead, describe yourself by what you have accomplished for previous employers vis-a-vis increasing revenues or reducing costs.
Sure – an in-house programmer is likely doing some type of expensive mumbo-jumbo in the eyes of his non-technical MBA-wielding manager.
To me, however, a programmer is who I'm looking for, while a resume full of revenue increases and cost reductions sounds like an "anomalously high-cost parasite who types some mumbo-jumbo into Excel and PowerPoint, claiming credit for others' work".
McKenzie says a software company looks at this just like a company hiring internal programmers, essentially. His example is "the guy who wrote the backend billing code that 97% of Google’s revenue passes through – he’s now an angel investor". The guy apparently got rich by being near a "profit center" rather than through his unusual skills.
The thing is, in this case I believe he's talking about Ron Garret, the PhD from NASA's Jet Propulsion Laboratory. Do you think they hired him because he described his work at the JPL in terms of revenues and costs? (BTW he didn't like working on the billing code, bought his stock options and quit, instead of choosing a career at the company's biggest "profit center".)
Did any unusual skills go into the billing code? Ron Garret says:
I did end up writing the credit card billing and accounting system, which is a nontrivial thing to get right. Fortunately for me, just before coming to Google I had taken some time to study computer security and cryptography, so I was actually well prepared for that particular task. …I designed the billing system to be secure against even a dishonest employee with root access (which is not such an easy thing to do). I have no idea if they are still using my system, but if they are then I'd feel pretty confident that my credit card number was not going to get stolen.
It sounds to me like his technical knowledge and programming ability were the bulk of his contribution, whereas deep thoughts – such as realizing that there will be some "cost reduction" due to not having credit card numbers stolen – are not something an employer needs to hire anyone for.
So if I ever send out a resume as a chip architect, I will focus on my technical role in transitioning from fixed-function hardware accelerators to programmable processors, more than the manpower this saved and the business we won as a result (which I think were real outcomes of our work, but which are rather hard to quantify – as these things often are unless you're a business-friendly-sounding liar.)
Incidentally, I'm not sure when I'll send out that resume, which brings us to the next point.
On job hopping, backstabbing, and the lack thereof
Co-workers and bosses are not usually your friends: You will spend a lot of time with co-workers. You may eventually become close friends with some of them, but in general, you will move on in three years…
[Your boss will] attempt to do things that none of your actual friends would ever do, like try to talk you down several thousand dollars in salary or guilt-trip you into spending more time with the company when you could be spending time with your actual friends. You will have other coworkers who — affably and ethically — will suggest things which go against your interests…
There is a certain internal consistency to a view that your coworkers are not your friends, because you will move on in 3 years. In fact, it's a bit circular. They aren't your friends – because you'll move on. And why will you move on? Well, I dunno, maybe for a 10% salary increase. What's there to lose? Relationships with coworkers? But coworkers aren't your friends!
Again, I don't disagree, but rather offer an alternative view, equally internally consistent. I have stayed at one job for more than a decade, in large part because I'm rather attached to the people I work with. To be sure, I got raises, and I was ready to quit over employment terms – but it'd take much more than 10%.
Isn't it just a quantitative difference in preferences – a 10% raise not being fundamentally different from, say, 100%? Well, sufficiently large quantitative changes add up to qualitative changes, as Marxian dialectics or some other Soviet philosophy thingie that my parents sometimes quote taught us. What's going on is that both approaches can lead to career advancement, but they do so very differently.
If you're willing to change jobs over a small raise, you'll be changing them frequently. You won't get attached to people, or to the work you're doing together. You will be very good at finding jobs and you will know what's generally going on in the industry and what's in demand. You will not know that many things specific to any of your employers. You and your employer will become very useful to each other fairly quickly, but you'll also be somewhat expendable for each other.
Alternatively, you can keep a job as long as it's a fun environment, requiring a significant raise once in a while. Your relationships with people combined with your long-term outlook can let you do things together that you otherwise couldn't plan or execute, and learn things you wouldn't have learned.
Much of my knowledge about chip design comes from ASIC hackers I worked with, and their willingness to develop their biggest ideas together with me came from trust that necessarily took time to build. It takes time to learn that none of you is in the habit of "suggesting things going against the other's interest", or pulling other unfriendly shenanigans.
Incidentally, if you stay at one place for a long while, then your worth to the employer grows to the point where you can get the significant raise that you'd quit over without actually quitting. Your worth can also grow well above what employers are willing to pay to experienced new hires, so there's no longer a point in switching jobs. This is somewhat analogous to becoming a consultant after having switched a whole lot of jobs and now making more than the next job hop could give you.
Both approaches work, though I don't have stats showing which tends to be more effective. I do believe that the long-term approach is more fun. I could never land the kind of gig that I have now through job hopping. More importantly, I wouldn't have the relationships that I have at work.
"More importantly", because all means to reach our ends often fail, and then all we're left with is our means. You can't count on any career strategy to give you either a dream job or a load of money; it'll work to some extent or other but who knows. What you can count on is your lifestyle being affected rather predictably by your career choices. The impact of these choices on relationships could thus be weighted as more important than the impact on career advancement because it's more predictable.
The part about bosses is the only one I very much agree with. (I had enough bosses to be able to plausibly deny that I'm thinking about any particular one here.) Yes, some of them will want you to work more time for less money (by itself a natural desire for an employer) while attempting to look like your friends (which is where it becomes a tad irksome). This just means that you should guard your own interests (as always) – and perhaps not judge people too harshly before spending time in their shoes.
How to value an equity grant
McKenzie says you shouldn't value equity very much, and he doesn't spend many words saying it. I'll talk about stock options, which are worse than an actual equity grant and which are the only thing I've ever been offered.
My basic outlook is again long-term. I work at a private company whose value rose almost tenfold over the decade I've been there. And it's still a private company, so there's never been an easy way to make money off most of the stock options.
From a long-term view, stock options look worse – and better.
Worse, because having stock options ties your hands behind your back. You usually can't afford to buy them when you quit, or at least buying them is a significant risk that you might be reluctant to take. If the company survives for a long while, then you may start to dislike the place but the hope of making money off your stock options now makes it harder to quit. If you generally like the place, options make it harder to negotiate a raise, since they know you can't quit.
So in the long term, options can effectively be a liability.
On the other hand, as the company matures, its stock options tend to get undervalued by employees, and for no good reason. People intuitively think along the lines of, "it's already expensive – how much can the price rise from now on?" It's a natural thought if the price has gone up threefold or tenfold already.
But what this misses is that you don't get paid in percentage points – you get dollars. A $100 share going up 20% to $120 means you make $20 per share. A $5 share going up 100% to $10 means you only make $5 per share. Stock options of a mature company whose price is still rising can thus be even nicer than stock options of a young company which rises more quickly but which is still cheap – and is more likely to go bust overnight.
The upshot is that people overvalue stock options early on – but they also often undervalue them later on.
Note that if you don't intend to stay for more than 3 years, then stock options are most certainly a liability, because they make it harder to quit – while the chances that the company makes it big in that span of time are very low.
Working at a startup
McKenzie lists valid reasons not to. In terms of job satisfaction, he says you can work on many exciting things in large corporations, not just startups.
Here's one thing in favor of startups. A large corporation usually doesn't have huge gaping holes that it doesn't know how to deal with or doesn't even notice. A startup often does have many such gaping holes, because, well, nothing is established yet, they don't even understand what they're doing, and most importantly, they are severely understaffed.
This means that you can grab pretty much any responsibility that you want to. There will be areas that people are competing to work on everywhere, but in a startup doing something hard enough, there will be a ton of hard problems nobody is competing to solve because there's not enough time or people for everything. You can be the person pointing out that problem and grabbing that responsibility.
As companies mature, being able to just work on whatever you want gets harder. My metaphor for it is nomadic programmers moving from problem to problem vs settlers with states and national borders where even visiting your neighbor's code may involve a visa.
This isn't a recommendation to work for startups, just one thing worth pointing out. The counterpoint is that if you're an orderly person who wants an orderly process, then a larger company known for its development culture is probably a better idea.
Impact of career on life happiness
At the end of the day, your life happiness will not be dominated by your career.
In one way, I agree wholeheartedly; whatever the merits of a job, it's a job, and I actually noticed my productivity fall at times of treating it as more important than that. The healthy way of looking at it is "just a job, at the end of the day".
On the other hand, we do spend quite some time at work. The question is, to what extent does it make sense to separate "work" from "life" – and to what extent is work just one part of life among many, to be treated similarly to those other parts?
I argue that the "work/life" separation shouldn't be strong enough to separate "coworkers" into a distinct category of human beings with whom relationships are formed fundamentally differently – nor is it necessarily great to be emotionally detached from the workplace to be always ready to abandon it and "move on".
(I'm not arguing that McKenzie's intent was to say the exact opposite of what I'm saying, BTW. I'm just commenting on some quotes and the general atmosphere of the text as I perceived it. A lot of things simply have different meanings when heard by different people; simple advice like "be wary of others' intentions" is great for someone overly trusting, but not for someone already verging on paranoia. Some people need to hear that coworkers aren't friends; today I'm writing for the other people.)
When I introduce myself, I usually call myself a programmer, regardless of my current work on chip architecture and management and stuff. I got into programming for the money, so it's not like I'm overflowing with pride when uttering "programmer". I just think programming is a great career and the right thing to call myself for me.
There's an alternative approach where you program, but you don't call it that, and you use programming as a starting point from which you transition to some form of being involved in business as directly as possible.
It sounds a bit roundabout to me – why not just get an MBA instead? – but maybe it's the right path for some (especially considering that some prestigious MBA programs want you to have industry experience before you can even enroll.)
The important thing is to choose the path that suits your preferences, follow it consistently, and realize where your approach is most likely to succeed. Because where I work, someone applying for a programming position and not calling himself a programmer will not make a good impression.
I agree emphatically with many of the points in McKenzie's article – my favorite point is the importance of communication skills – and I very much recommend it.
April 19th, 2013 — wetware
There's this common notion of "10x programmers" who are 10x more productive than the average programmer. We can't quantify productivity so we don't know if it's true. But definitely, enough people appear unusually productive to sustain the "10x programmer" notion.
How do they do it?
People often assume that 10x more productivity results from 10x more aptitude or 10x more knowledge. I don't think so. Now I'm not saying aptitude and knowledge don't help. But what I've noticed over the years is that the number one factor is 10x more selectivity. The trick is to consistently avoid shit work.
And by shit work, I don't necessarily mean "intellectually unrewarding". Rather, the definition of shit work is that its output goes down the toilet.
I've done quite a lot of shit work myself, especially when I was inexperienced and gullible. (One of the big advantages of experience is that one becomes less gullible that way – which more than compensates for much of the school knowledge having faded from memory.)
Let me supply you with a textbook example of hard, stimulating, down-the-toilet-going work: my decade-old adventures with fixed point.
You know what "fixed point arithmetic" is? I'll tell you. It's when you work with integers and pretend they're fractions, by implicitly assuming that your integer x actually represents x/2^N for some value of N.
So to add two numbers, you just do x+y. To multiply, you need to do x*y>>N, because plain x*y would represent x*y/2^2N, right? You also need to be careful so that this shit doesn't overflow, deal with different Ns in the same expression, etc.
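For concreteness, here's what that bookkeeping looks like in a minimal Q16.16 sketch (an illustration only, nothing to do with the actual code from the story below):

```cpp
#include <cstdint>
#include <cstdio>

// Toy Q16.16 fixed point: the int32_t x stands for the real number x / 2^16.
const int N = 16;

int32_t to_fixed(double v)   { return (int32_t)(v * (1 << N)); }
double  to_double(int32_t x) { return (double)x / (1 << N); }

int32_t fixed_add(int32_t a, int32_t b) { return a + b; }  // same N: plain integer add
int32_t fixed_mul(int32_t a, int32_t b) {
    // a*b stands for a*b / 2^(2N), so shift right by N to get back to 2^N;
    // the intermediate product needs 64 bits or it overflows long before the result does.
    return (int32_t)(((int64_t)a * b) >> N);
}

int main() {
    int32_t x = to_fixed(3.25), y = to_fixed(0.5);
    printf("%f %f\n", to_double(fixed_add(x, y)), to_double(fixed_mul(x, y)));  // 3.75 1.625
}
```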
Now in the early noughties, I was porting software to an in-house chip which was under development. It wasn't supposed to have hardware floating point units – "we'll do everything in fixed point".
Here's a selection of things that I did:
- There was a half-assed C++ template class called InteliFixed<N> (there still is; I kid you not). I put a lot of effort into making it, erm, full-assed (what's the opposite of half-assed?) This included things like making operator+ commutative when it gets two fixed point numbers of different types (what's the type of the result?); making sure the dreadful inline assembly implementing 64-bit intermediate multiplications inlines well; etc. etc.
- My boss told me to keep two versions of the code – one using floating point, for the noble algorithm developers, and one using fixed point, for us grunt workers fiddling with production code. So I manually kept the two in sync.
- My boss also told me to think of a way to run some of the code in float, some not, to help find precision bugs. So I wrote a heuristic C++ parser that automatically merged the two versions into one. It took some functions from the "float" version and others from the "fixed" version, based on a header-file-like input telling it what should come from which version.
- Of course this merged shit would not run or even compile just like that, would it? So I implemented macros where you'd pass to functions, instead of vector<float>&, a REFERENCE(vector<float>), and a horrendous bulk of code making this work at runtime when you actually passed a vector<InteliFixed> (which the code inside the function then tried to treat as a vector<float>.)
- And apart from all that meta-programming, there was the programming itself of course. For example, solving 5×5 equation systems to fit polynomials to noisy data points, in fixed point. I managed to get this to work using hideous normalization tricks and assembly code using something like 96 bits of integer precision. My code even worked better than single-precision floating point without normalization! Yay!
For months and months, I worked as hard as ever, cranking out as much complicated, working code as ever.
And here's what I should have done:
- Convince management to put the damned hardware floating point unit into the damned chip. It didn't cost that many square millimeters of silicon – I should have insisted on finding out how many. (FPUs were added in the next chip generation.)
- Failing that, lay my hands on the chip simulator, measure the cost of floating point emulation, and use it wherever it was affordable. (This is what we ended up doing in many places.)
- Tell my boss that maintaining two versions in sync like he wanted isn't going to work – they're going to diverge completely, so that no tool in hell will be able to partially merge them and run the result. (Of course this is exactly what happened.)
Why did this end up in many months of shit work instead of doing the right thing? Because I didn't know what's what, because I didn't think I could argue with my management, and because the work was challenging and interesting. It then promptly went down the toilet.
The hardest part of "managing" these 10x folks – people widely known as extremely productive – is actually convincing them to work on something. (The rest of managing them tends to be easy – they know what's what; once they decide to do something, it's done.)
You'd expect the opposite, kind of, right? I mean if you're so productive, why do you care? You work quickly; the worst thing that happens is, nothing comes out of it – then you'll just do the next thing quickly, right? I mean it's the slow, less productive folks that ought to be picky – they're slower and so get fewer shots at new stuff to work on to begin with, right?
But that's the optical illusion at work: the more productive folks aren't that much quicker – not 10x quicker. The reason they appear 10x quicker is that almost nothing they do is thrown away – unlike a whole lot of stuff that other people do.
And you don't count that thrown-away stuff as productivity. You think of a person as "the guy who did X" where X was famously useful – and forget all the Ys which weren't that useful, despite the effort and talent going into those Ys. Even if something else was "at fault", like the manager, or the timing, or whatever.
To pick famous examples, you remember Ken Thompson for C and Unix – but not for Plan 9, not really, and not for Go, not yet – on the contrary, Go gets your attention because it's a language by those Unix guys. You remember Linus Torvalds even though Linux is a Unix clone and git is a BitKeeper clone – in fact because they're clones of successful products which therefore had great chances to succeed due to good timing.
The first thing you care about is not how original something is or how hard it was to write or how good it is along any dimension: you care about its uses.
The 10x programmer will typically fight very hard to not work on something that is likely enough to not get used.
One of these wise guys asked me the other day about checkedthreads which I've just finished, "so is anyone using that?" with that trademark irony. I said I didn't know; there was a comment on HN saying that maybe someone will give it a try.
I mean it's a great thing; it's going to find all of your threading bugs, basically. But it's not a drop-in replacement for pthreads or such, you need to write the code using its interfaces – nice, simple interfaces, but not the ones you're already using. So there's a good chance few people will bother; whereas Helgrind or the thread sanitizer, which have tons of false negatives and false positives, at least work with the interfaces that people use today.
Why did I bother then? Because the first version took an afternoon to write (that was before I decided I want to have parallel nested loops and stuff), and I figured I had a chance because I'd blog about it (as I do, for example, right now). If I wrote a few posts explaining how you could actually hunt down bugs in old-school shared-memory parallel C code even easier than with Rust/Go/Erlang, maybe people would notice.
But there's already too much chance of a flop here for most of the 10x crowd I personally know to bother trying. Even though we use something like checkedthreads internally and it's a runaway success. In fact the ironic question came from the guy who put a lot of work into that internal version – because internally, it was very likely to be used.
See? Not working on potential flops – that's productivity.
How to pick what to work on? There are a lot of things one can look at:
- Is there an alternative already available? How bad is it? If it's passable, then don't do it – it's hard to improve on a good thing and even harder to convince that improvements are worth the switch.
- How "optional" is this thing? Will nothing work without it, or is it a bell/whistle type of thing that can easily go unnoticed?
- How much work do users need to put in to get benefits? Does it work with their existing code or data? Do they need to learn new tricks or can they keep working as usual?
- How many people must know about the thing for it to get distributed, let alone used? Will users mostly run the code unknowingly because it gets bundled together with code already distributed to them, or do they need to actively install something? (Getting the feature automatically and then having to learn things in order to use it is often better than having to install something and then working as usual. Think of how many people end up using a new Excel feature vs how many people use software running backups in the background.)
- How much code to deliver how much value? Optimizing the hell out of a small kernel doing mpeg decompression sounds better than going over a million lines of code to get a 1.2x overall speed-up (even though the latter may be worth it by itself; it just necessarily requires 10x the programmers, not one "10x programmer").
- Does it have teeth? If users do something wrong (or "wrong"), does it silently become useless to them (like static code analysis when it no longer understands a program), or does it halt their progress until they fix the error (like a bounds-checked array)?
You could easily expand this list; the basic underlying question is, what are the chances of me finishing this thing and then it being actually used? This applies recursively to every feature, sub-feature and line of code: does it contribute to the larger thing being used? And is there something else I could do with the time that would contribute more?
Of course it's more complicated than that; some useful things are held in higher regard than others for various reasons. Which is where Richard Stallman enters and requires us to call Linux "GNU/Linux" because GNU provided much of the original userspace stuff. And while I'm not going to call it "Gah-noo Lee-nux", there's sadly some merit to the argument, in the sense that yeah, unfortunately some hard, important work is less noticed than other hard, important work.
But how fair things are is beside the point. After all, it's not like 10x the perceived productivity is very likely to give you 10x the compensation. So there's not a whole lot of reasons to "cheat" and appear more productive than you are. The main reason to be productive is because there's fire raging up one's arse, more than any tangible benefit.
The point I do want to make is, to get more done, you don't need to succeed more quickly (although that helps) as much as you need to fail less often. And not all failures are due to lack of knowledge or skill; most of them are due to quitting before something is actually usable – or due to there being few chances for it to be used in the first place.
So I believe, having authored a lot of code that went down the toilet, that you don't get productive by working as much as by not working – not on stuff that is likely to get thrown away.
February 8th, 2013 — wetware
Q: In retrospect, wasn't the decision to trade off programmer efficiency, security, and software reliability in exchange for runtime performance a fundamental mistake?
A: Well, I don’t think I made such a tradeoff. I want elegant and efficient code. Sometimes I get it. The efficiency vs. correctness, efficiency vs. programmer time, efficiency vs. high level, etc. dichotomies are largely bogus.
— An interview with Bjarne Stroustrup
Unlimited-precision symbolic computation is more elegant than floating point numbers. You simply never have any numerical stability problems. Anything algebraically correct – a valid way to solve for x given the equations involving x – is also computationally correct. You don't need to know all the quirks of floating point, and you won't need "numerical recipes" which are basically ways to deal with these quirks.
Symbolic computation is rather widely available – say, in Mathematica, Matlab/Maple, etc. – but it's not used nearly as much as floating point. That's because floating point is much more efficient, and a whole lot of things cannot be done in a reasonable amount of time and space with symbolic computation.
It is undeniable that this efficiency comes at a cost of correctness (in terms of increased likelihood of bugs), programmer time, and elegance. There are plenty of algebraically elegant solutions which just don't work in floating point. If you don't notice, you have a bug; if you do notice, you spend your (programmer) time looking for an alternative, and said alternative may be less elegant.
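A tiny sketch of an algebraically valid step falling apart in floating point – the textbook quadratic formula losing its small root to cancellation (the numbers are just an illustration):

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Solve x^2 - 1e8*x + 1 = 0; the exact small root is ~1e-8.
    double a = 1, b = -1e8, c = 1;
    double d = std::sqrt(b * b - 4 * a * c);

    // Algebraically correct, numerically wrong: -b and d are nearly equal,
    // so their difference keeps almost none of the original precision.
    double naive = (-b - d) / (2 * a);

    // Algebraically equivalent rearrangement: x1*x2 = c/a, so compute the
    // well-conditioned big root first and divide.
    double big = (-b + d) / (2 * a);
    double rearranged = c / (a * big);

    // On a typical IEEE double this prints something like 7.45e-09 vs 1e-08.
    printf("naive: %g  rearranged: %g\n", naive, rearranged);
}
```

That rearrangement is exactly the sort of "numerical recipe" you never need with symbolic computation.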
Floating point is not the lowest level we can sink to. In many cases the quest for still more efficiency brings us to the dark quagmires of fixed point arithmetic. That's when you have an integer and you implicitly assume it's divided by 2^N – the exponent is statically known so the point isn't floating at run time, so to say. So to implement a+b you just use integer addition, and a*b is an integer multiplication followed by a right shift by N. Or there are midway scenarios where you have many integers with a common, dynamically computed exponent because they all have roughly the same range (FFT is one case where this is often done.)
Fixed point is so ugly that there's not even a recipes book that I'm aware of; it just doesn't come out tasty in the slightest. The biggest trouble isn't even the very likely overflow but the loss of precision: floating point guarantees a certain number of significant bits in the mantissa, while fixed point doesn't – unless you explicitly normalize the number at some point using CLZ or similar (making it more of a floating point emulation than "true" fixed point where exponents aren't represented at runtime). You think you have a 32-bit mantissa and an implicit exponent so the number is rather precise – but those 32 bits can have most of the high bits set to zero and then it's not precise at all.
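To see how fast the significant bits evaporate, a toy Q16.16 illustration (the __builtin_clz used below is the GCC/Clang builtin):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Q16.16: the integer q stands for q / 65536. A float keeps ~24 significant
    // bits regardless of magnitude; here the number of surviving bits depends
    // entirely on how big the value happens to be.
    const double wanted = 0.0001;
    int32_t q = (int32_t)(wanted * 65536);   // == 6: about 3 significant bits left
    printf("%d stands for %.10f, wanted %.10f\n", q, q / 65536.0, wanted);

    // The usual remedy: pick a shared exponent so values occupy the high bits,
    // e.g. count leading zeros and track the shift separately - at which point
    // you're hand-rolling a floating point emulation rather than "true" fixed point.
    int headroom = __builtin_clz((uint32_t)q) - 1;   // 28 unused bits for q == 6
    printf("could shift left by %d before overflowing\n", headroom);
}
```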
However, a mixture of 8-bit, 16-bit and 32-bit fixed point operations often beats the performance of floating point by a large margin; especially if you have SIMD instructions, because you can always fit 4x more 8-bit numbers into a register than 32-bit numbers, and a single-precision floating point multiplier is ~10x more costly in hardware than an 8-bit integer multiplier, so you have fewer of them.
Again, this efficiency comes at a cost of correctness (in terms of increased likelihood of bugs), programmer time, and elegance.
In computer vision, you're often looking for objects of a certain class, and you have a classifier taking a rectangular image region and telling whether this region contains an object of that class. A simple and elegant object detection algorithm is to apply this classifier to every possible rectangle in the image, and then remove rectangles which mostly overlap (as in, if there are 15 similar rectangles saying there's a face in roughly the same place, make one rectangle out of them all).
This elegant algorithm is never used, because there are too many possible rectangles (every coordinate times every size). A common optimization is to use a cascade of classifiers. That is, apply a very cheap classifier with a lot of false positives but hopefully almost no false negatives to every region. The purpose is to throw away most of the rectangles so that the remaining smaller set still contains all the true positives – and a lot of false positives, of course, but far fewer.
This is repeated with many (possibly increasingly expensive) classifiers processing increasingly smaller sets of rectangles. The most widely deployed classifier cascade is probably the Viola-Jones face detector, currently available in most digital cameras displaying little squares around faces. As you could have noticed, it often misses a face, which is to be expected with all the little classifiers hurrying to throw rectangles away. And which is OK for a consumer application where a success rate of 90-95% is perfectly fine and an extra 1% of detection rate is not worth a $0.01 increase in price. The point is that the error rate is undeniably increased by stricter efficiency requirements.
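A sketch of what such a cascade looks like, with a made-up classifier interface just to show the shape of the thing:

```cpp
#include <vector>

struct Rect { int x, y, w, h; };

// Made-up interface: each stage is cheap and answers "definitely not" or "maybe".
struct Classifier {
    virtual bool maybe_object(const Rect& r) const = 0;
    virtual ~Classifier() {}
};

// Each stage only sees what the previous, cheaper stages let through. A stage
// saying "no" throws the rectangle away for good - which is where the speed
// comes from, and also where the occasional missed face comes from.
std::vector<Rect> run_cascade(const std::vector<Rect>& candidates,
                              const std::vector<const Classifier*>& stages) {
    std::vector<Rect> survivors = candidates;
    for (const Classifier* stage : stages) {
        std::vector<Rect> passed;
        for (const Rect& r : survivors)
            if (stage->maybe_object(r))
                passed.push_back(r);
        survivors.swap(passed);
    }
    return survivors;  // still needs the overlapping-rectangle merging step
}
```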
The upshot is that object detection provides a broad family of examples where, again, runtime efficiency comes at a cost of correctness (in terms of increased likelihood of bugs – there's much more code to write – as well as the ultimate detection rate), programmer time, and elegance.
(Even the smallish sub-problem of merging overlapping rectangles provides an example where efficiency has to be bought with all those other things including elegance. A short, readable, elegant solution could use an O(N^2) nested loop where each rectangle is intersected with every other rectangle. One optimization is some sort of spatial data structure where you don't look at rectangles if they don't fall into the same bin of the spatial subdivision because then they can't intersect. That's faster, more buggy and less readable.)
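For instance, a minimal sketch of the elegant quadratic version (the overlap test here is one arbitrary choice among many):

```cpp
#include <algorithm>
#include <vector>

struct Box { int x0, y0, x1, y1; };  // corners, x0 < x1 and y0 < y1

// One possible "same detection" test: the intersection covers most of the smaller box.
static bool mostly_overlap(const Box& a, const Box& b) {
    int iw = std::min(a.x1, b.x1) - std::max(a.x0, b.x0);
    int ih = std::min(a.y1, b.y1) - std::max(a.y0, b.y0);
    if (iw <= 0 || ih <= 0) return false;
    int smaller = std::min((a.x1 - a.x0) * (a.y1 - a.y0),
                           (b.x1 - b.x0) * (b.y1 - b.y0));
    return 2 * iw * ih > smaller;
}

// The short, readable O(N^2) version: every box against every other box.
// A spatially binned version would be faster, longer and easier to get wrong.
std::vector<Box> merge_overlapping(const std::vector<Box>& boxes) {
    std::vector<Box> merged;
    std::vector<bool> eaten(boxes.size(), false);
    for (size_t i = 0; i < boxes.size(); ++i) {
        if (eaten[i]) continue;
        Box acc = boxes[i];
        for (size_t j = i + 1; j < boxes.size(); ++j)
            if (!eaten[j] && mostly_overlap(acc, boxes[j])) {
                acc.x0 = std::min(acc.x0, boxes[j].x0);  // fold into one bounding box
                acc.y0 = std::min(acc.y0, boxes[j].y0);
                acc.x1 = std::max(acc.x1, boxes[j].x1);
                acc.y1 = std::max(acc.y1, boxes[j].y1);
                eaten[j] = true;
            }
        merged.push_back(acc);
    }
    return merged;
}
```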
Does this have anything to do with the quote by Stroustrup though? His implied point was how the std::sort template is more elegant as well as more efficient than C's qsort function fiddling with void pointers, right? Or ostream vs printf? Whereas these are all examples of "algorithmic efficiency" – is that even related to language design?
Well, the thing is, "algorithms" and "languages/code" is a continuum:
Think of all the psychic energy expended in seeking a fundamental distinction between "algorithm" and "program". — Alan Perlis
Given that it's a continuum, it is doubtful that a statement which is profoundly wrong in an "algorithmic" context could be true in a "programming" context. If the tradeoff between runtime efficiency and programmer efficiency/"elegance" is fundamental from an algorithmic point of view, then it's likely fundamental in computing in general.
For a concrete example of how blurry the line between "algorithmic efficiency" and "code efficiency" is, let's discuss corner detection. The FAST corner detector is a decision tree looking at pixels surrounding the central pixel and comparing the image intensity of the center to its surroundings. Similarly to other classifier cascades, "not a corner" is a quick decision, while "yes, a corner" is decided after all the checks are done.
The decision tree is implemented in several thousand lines of auto-generated C code with gotos. (That's one addition to the recent discussion about the utility of gotos in systems programming; add computer vision to the list of goto applications, I guess.)
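To give a flavor of that style, here's a toy hand-written fragment in the same spirit – the real generated detector looks at 16 pixels on a circle and runs to thousands of lines, so this is only meant to show the shape:

```cpp
#include <cstdint>

// Toy decision tree with gotos in the spirit of the generated FAST code:
// decide the common "not a corner" case as early as possible.
// p points at 4 sample pixels around the center, c is the center intensity,
// t is the threshold. (Invented layout - not the real detector's sampling pattern.)
bool toy_corner_test(const uint8_t* p, int c, int t) {
    if (p[0] > c + t) goto brighter0;
    if (p[0] < c - t) goto darker0;
    return false;                     // pixel 0 looks like the center: bail out
brighter0:
    if (p[2] <= c + t) return false;  // the opposite pixel must also be brighter
    if (p[1] > c + t) return true;
    if (p[3] > c + t) return true;
    return false;
darker0:
    if (p[2] >= c - t) return false;
    if (p[1] < c - t) return true;
    if (p[3] < c - t) return true;
    return false;
}
```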
Is it possible to implement the decision tree in a more elegant and readable way? Of course – but at the cost of efficiency; not asymptotic efficiency since it'd be the same decision tree, but efficiency nonetheless.
Is this goto business an "algorithmic" optimization or a "program" optimization? Consider that FAST's entire raison d'etre is being faster than, say, the Harris corner detector. Constants matter for high-resolution images processed in real time.
Consider furthermore that both FAST and Harris are O(#pixels) since they look at a finite, small number of pixels around each coordinate and execute a finite, small number of operations. Consider that which is more efficient depends on the platform – SIMD helps speed up Harris but not FAST, and different SIMD instruction sets speed it up by very different factors. (This is also true for linear classifiers vs Viola Jones and for other cases.) And consider the fact that algorithmically, they're wildly different – Harris looks at eigenvectors whereas FAST is an intensity-based decision tree, they have tunable decision thresholds with different meaning, and different sets of false positives and false negatives.
So is FAST a work in the area of "algorithms" or "programming", and is the auto-generated mountain of code essential to make it efficient an "algorithm" or a "program"? My answer is that it's both, in the sense that you can't really draw the line.
But what about std::sort and C++'s combination of efficiency and elegance? Well, C++ rather obviously does pay with programmer efficiency for runtime efficiency, without an option to opt out of the deal. Every allocation can leak, every reference can be dangling, every buffer can overflow, etc. etc. etc.
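A small sketch of the kind of code that compiles cleanly and pays for the zero-overhead choice later (not from any real codebase):

```cpp
#include <iostream>
#include <string>
#include <vector>

// Returning a reference to avoid a copy - the zero-overhead choice.
const std::string& shortest(const std::vector<std::string>& names) {
    const std::string* best = &names[0];
    for (const std::string& s : names)
        if (s.size() < best->size())
            best = &s;
    return *best;
}

int main() {
    // The temporary vector dies at the end of this statement, so s dangles;
    // the compiler is happy, and whatever gets printed (or crashes) is our problem.
    const std::string& s = shortest({"alpha", "be", "gamma"});
    std::cout << s << "\n";
}
```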
This blindingly obvious fact doesn't surprise those who realize the fundamental tradeoff between efficiency and a whole lot of other things, some of which can be collectively called "elegance". Whereas those refusing to believe in such a tradeoff manage to not even notice the consequences. For example:
The relatively small size of the C++ standard library – primarily reflecting the lack of resources in the ISO C++ standards committee – compared with the huge corporate libraries can be a real disadvantage for C++ developers compared to users of proprietary languages.
…So why do languages without corporate backing which are 2 to 3 times younger than C++, such as Perl, Python, and Ruby, have so many more libraries, both standard and non-standard but widely used?
The best uses of C++ involve deliberate design. You design classes to represent the notions of your application, you organize those classes into hierarchies, you express your algorithms precisely and abstractly (no, that “and” is not a mistake), you use libraries, you build libraries, you devise error handling and resource management strategies and express them in code. The result can be beautiful, efficient, maintainable, etc. However, it’s not just sitting down and writing a series of statements directly manipulating characters, integers, and floating point numbers.
The thing is that actually doing something useful involves a whole lot of "direct manipulations" of characters, integers and floating point numbers – and strings, arrays, hash tables, files, sockets, windows, matrices, etc. Languages which let you "just sit down and write the series of statements" give programmers the extra productivity which results in all those extra libraries getting written.
However, equally undeniably it does cost you runtime efficiency, because you pay an overhead for built-in resource management strategies such as garbage collection, built-in error detection strategies such as bounds checking, and a whole lot of other things.
It's not surprising that Stroustrup sees the problem in the fact that corporations "with the resources" invest them in what he thinks is the wrong thing, presumably because of their self-interested profit motives. Alex Stepanov who designed the STL expressed similar statements, and so did Alan Kay and every other perfectionist technologist. If you seek perfection to the point of denying the existence of most obvious tradeoffs – and tradeoffs are a pesky thing for a perfectionist because they imply that perfection is unattainable – then you're also likely to somewhat resent corporations, markets, etc. For a discussion of that, see my take on Worse Is Better vs The Right Thing.
(Of course there are plenty of perfectionists who, instead of rationalizing C++'s productivity problems, spend their time denying that Python is slow, or keep waiting for Python to become fast. It will not become fast. Also, all its combinations with C/C++ designed to remedy this inefficiency will forever be ugly. We had psyco, PyPy, pyrex, Cython, Unladen Swallow, CPython extension modules, Boost.Python, and who knows what else. Python is not designed to be efficient; it's designed for productivity and for extensibility through a necessarily ugly C FFI. The tradeoff is fundamental. Python is slow forever. Python bindings are ugly forever.)
So if the tradeoff is fundamental, should we give up on efficient resource utilization? No – if the elegant thing is to load the database table into RAM, it doesn't mean that we have enough RAM. Should we give up on programmer productivity? No – inline assembly or lock-free code which isn't obviously bug-free doesn't belong in our cold paths.
We should, however, give up on perfection. Some code will be slower than we want because we don't have time to optimize it, and some code will be uglier than we want because we have no choice but to optimize it.
A hope to defeat a fundamental tradeoff is nothing but a source of frustration, and it's a bliss to have lost such a hope.
November 17th, 2012 — wetware
An economic value is the worth of a good or service as determined by the market.
You keep using that word. I do not think it means what you think it means.
– Inigo Montoya
If people pay you $1, then the economic value of your good or service has been determined by the market to be $1.
"Creating value" is thus a euphemism for "getting people to pay you money" – which has nothing to do with the usual meaning of "value".
Why is "value" an irksome euphemism? Because heroin dealers "create value", as determined by the market.
In the context of my own profession, all of the following are examples of value creation:
- An office suite using undocumented and constantly changing formats. Value to users having no other way to access their own and others' documents: $100-$500, depending.
- A distribution channel allowing developers to deploy software on popular devices, for which no legal alternative exists. Value to developers: 30% of revenue.
- A social network with a billion private and corporate users who signed up for free, with a new charge to reach a given share of one's audience. Value created: $200 per post.
The more money I've extracted from you, the more value I've created, haven't I?
I'm not picking on Microsoft, Apple or Facebook. I can imagine working for any of them. My conscience is as flexible as the next guy's.
(A particularly inflexible conscience is a horrible condition. Feet to which no mass-produced shoes fit are merely inconvenient. A conscience incompatible with mass-produced social arrangements is a huge burden – not just on its owner, but on his friends and family.)
All I'm saying is that goods and services are distinct from bads and disservices, though both "create value".
Moreover, some sort of disservice tends to be essential to "value creation", a.k.a. the extraction of money. People are attached to their money, and will only part with it when given little choice. Microsoft, Apple and Facebook constantly hone their methods of limiting users' choice. Who doesn't?
Business is what it is. It's not that consumers (us) are any better than producers (us). Nor is it impossible for something "free" – as in speech, beer, rider, whatever – to be a disservice to its users.
I just don't think "value" is the right word.
October 6th, 2012 — wetware
"Do You Really Want to be Doing This When You're 50?"
Well, I didn't really want to be doing this when I was 20. I'm in it for the money. As long as there's money in programming, I'll stay for the money, in all likelihood.
What else do you want to be doing when you're 50? Give me a profession remotely close to programming in the following ways:
- Little or no required education
- Good compensation, even for mediocre performers
- Millions of jobs
- No physical effort
- No health or legal risks
Programming is money for nothing. Programming is very easy to enter and extremely hard to quit. What would you do instead?
I work with three former lawyers – two became programmers and one became a PM. I haven't met programmers who became lawyers. I do know an engineer – not a programmer – who became a patent attorney (reported reason: "at some point, you resent your manager being the age of your kids"). Would you like to become a patent attorney when you're 50?
I had a manager who decided he'd rather be a school teacher, thinking that this line of work is more beneficial to society. He quit after 8 months, saying in his parting interview to a mainstream newspaper: "Sometimes I just want to enter the classroom with a machine gun and open fire". He's with Samsung now; he feels that his contribution to smartphone imagers benefits society substantially enough.
One of my roommates at work has been studying a bunch of things for a while now. He's got a degree in psychology and in something called Visual Theater. He's been programming part-time all the while, which is how he financed his studies. He's programming as a part of his visual performances (there's computer music involved). He'll likely be programming to finance his art work. I'm not sure he plans to quit programming at any defined point.
I've seen a lot of people "quitting" to study anything from physics to philosophy, and then going back to programming. The money is addictive. There are many other sources of satisfaction, of course – which is why I run this blog for free – but much of this satisfaction has to do with demand, directly or indirectly, and is thus very much related to money. "Building something useful" and "making money" are close relatives.
You could, of course, become independently wealthy. But you probably won't, and then programming is your plan B. There's also a thing about material wealth – it's easily taken away. I'm from Soviet Russia, so I tend to exaggerate the likelihood of that – but really, property is easily confiscated, and paper money can become paper overnight. It's not just a USSR thing; the US confiscated gold from its citizens at about the same time as the USSR. Professional ability, however, can't be confiscated. The prudent (paranoid?) independently wealthy programmer will thus make some effort to stay in a good shape.
There's the argument that professional programming is stressful. Again – compared to what? A doctor's work? A lawyer's work? Answering calls by irate customers while your responses are recorded for later inspection?
What stress? Programmers who can program at all – as in, print out a binary tree correctly – are very scarce. This scarcity makes it rather hard to push programmers around. You can try to bully them into doing unpaid overtime, but they quickly learn that it's a seller's market, and that you're basically bluffing. You have nobody to replace them with.
With demand outstripping supply, there's enough space in programming for everyone. This makes for a not-so-competitive environment, compared to, say, finance/investment banking type of jobs. Programmers are also typically shielded from customers and senior management – the kind of people who're always right, a trait making communication somewhat tiresome.
Deadlines? Sure, we have them, just like everybody else. Let's admit it though – we tend to miss them, and it's not very stressful to us unless we want it to be. If you're given an impossible schedule, and you do your best, and you miss the deadline, you can suffer deeply or you can maintain mental peace. The fact is that your material well-being is rarely in jeopardy because of a missed deadline, so your reaction is fully up to you.
There's the argument that programmers can't fully understand what's going on, what with all the APIs and layers and stuff. And if you don't understand your own environment, that's stressful and that's not fun. Fair enough; but again – who does understand his environment more than a programmer? A doctor digging into a patient's guts? A lawyer sifting through legal documents? An investor trading financial derivatives? A manager overseeing 10 or 20 programmers? With all the self-inflicted complexity, we're still in a better shape than most.
The fact is that there are relatively few programmers in their fifties around. Does it mean people don't survive in programming though? More likely, it is simply a result of growth. There were few 20-year-old programmers 30 years ago – compared to 10 years ago. Therefore, there are fewer 50-year-old programmers today than 30-year-old programmers. To the extent that the growth in programming slows down, things will be different 20 years down the road.
So I'm not planning to quit programming, not because it's such a great source of joy by itself, but because it looks so good compared to just about anything else. Maybe not the most "passionate" statement – but passion burns out, whereas greed is sustainable. And if you plan to quit programming, I wonder what your alternative is, and I won't be surprised if you come back to programming in a few years.
August 11th, 2012 — wetware
I thought about this one for a couple of years, then wrote it up, and left it untouched for another couple of years.
What prompted me to publish it now – at least the first, relatively finished part – is Steve Yegge's post, an analogy between the "liberals vs conservatives" debate in politics and some dichotomies in the professional worldviews of software developers. The core of his analogy is risk aversion: conservatives are more risk averse than liberals, both in politics and in software.
I want to draw a similar type of analogy, but from a somewhat different angle. My angle is this: in politics, one thing people view rather differently is the role of markets and competition. Some view them as mostly good, others as mostly evil. This is loosely aligned with the "right" and the "left" (with the caveat that the political right and left are very overloaded terms).
So what does this have to do with software? I will try to show that the disagreement about markets is at the core of the conflict presented in the classic essay, The Rise of Worse is Better. The essay presents two opposing design styles: Worse Is Better and The Right Thing.
I'll claim that the view of economic evolution is what underlies the Worse Is Better vs The Right Thing opposition – and not the trade-off between design simplicity and other considerations as the essay states.
So the essay says one thing, and I'll show you it really says something else. Seriously, I will.
And then I'll tell you why it's important to me, and why – in Yegge's words – "this conceptual framework became one of the most important tools in my toolkit" (though of course each of us is talking about his own analogy).
Specifically, I came to think that you can be for evolution or against it, and that I'm naturally inclined to be against it – and once I realized that, I've been trying hard not to overdo it.
Much of the work on technology is done in a market context. I mean "market" in a relatively broad sense – not just proprietary for-profit developments, but situations of competition. Programs compete for users, specs compete for implementers, etc.
Markets and competition have a way of evoking strong, polarized opinions in people. The technology market and technical people are no exception, including the most famous and highly regarded people. Here's what Linus Torvalds has to say about competition:
Don't underestimate the power of survival of the fittest. And don't ever make the mistake that you can design something better than what you get from ruthless massively parallel trial-and-error with a feedback cycle. That's giving your intelligence much too much credit.
And here's what Alan Kay has to say:
…if there’s a big idea and you have deadlines and you have expedience and you have competitors, very likely what you’ll do is take a low-pass filter on that idea and implement one part of it and miss what has to be done next. This happens over and over again.
Linus Torvalds thus views competition as a source of progress more important than anyone's ability to come up with bright ideas. Alan Kay, on the contrary, perceives market constraints as a stumbling block insurmountable for the brightest idea.
(The fact that Linux is vastly more successful than Smalltalk in "the market", whatever market one considers, is thus fully aligned with the creators' values.)
Incidentally, Linux was derived from Unix, and Smalltalk was greatly influenced by Lisp. At one point, Lisp and Unix – the cultures and the actual software – clashed in a battle for survival. The battle apparently followed a somewhat one-sided, Bambi meets Godzilla scenario: cheap Unix boxes quickly replaced sophisticated Lisp-based workstations, which became collectible items.
The aftermath is bitterly documented in The UNIX-HATERS Handbook, groundbreaking in its invention of satirical technical writing as a genre. The book's take on the role of evolution under market constraints is similar to Alan Kay's and the opposite of Linus Torvalds':
Literature avers that Unix succeeded because of its technical superiority. This is not true. Unix was evolutionarily superior to its competitors, but not technically superior. Unix became a commercial success because it was a virus. Its sole evolutionary advantage was its small size, simple design, and resulting portability.
The "Unix Haters" see evolutionary superiority as very different from technical superiority – and unlikely to coincide with it. The authors' disdain for the products of evolution isn't limited to development driven by economic factors, but extends to natural selection:
Once the human genome is fully mapped, we may discover that only a few percent of it actually describes functioning humans; the rest describes orangutans, new mutants, televangelists, and used computer sellers.
Contrast that to Linus' admiration of the human genome:
we humans have never been able to replicate something more complicated than what we ourselves are, yet natural selection did it without even thinking.
The UNIX-HATERS Handbook presents in an appendix Richard P. Gabriel's famous essay, The Rise of Worse Is Better. The essay presents what it calls two opposing software philosophies. It gives them names – The Right Thing for the philosophy underlying Lisp, and Worse Is Better for the one behind Unix – names I believe to be perfectly fitting.
The essay also attempts to capture the key characteristics of these philosophies – but in my opinion, it focuses on non-inherent embodiments of these philosophies rather than their core. The essay claims it's about the degree of importance that different designers assign to simplicity. I claim that it's ultimately not about simplicity at all.
I thus claim that the essay discusses real things and gives them the right names, but the wrong definitions – a claim somewhat hard to defend. Here's my attempt to defend it.
Worse is Better – because it's simpler?
Richard Gabriel defines "Worse Is Better" as a design style focused on simplicity, at the expense of completeness, consistency and even correctness. "The Right Thing" is outlined as the exact opposite: completeness, consistency and correctness all trump simplicity.
First, "is it real"? Does a conflict between two philosophies really exist – and not just a conflict between Lisp and Unix? I think it does exist – that's why the essay strikes a chord with people who don't care much about Lisp or Unix. For example, Jeff Atwood
…was blown away by The Rise of "Worse is Better", because it touches on a theme I've noticed emerging in my blog entries: rejection of complexity, even when complexity is the more theoretically correct approach.
This comment acknowledges the conflict is real outside the original context. It also defines it as a conflict between simplicity and complexity, similarly to the essay's definition – and contrary to my claim that "it's not about simplicity".
But then examples are given, examples of "winners" at the Worse Is Better side – and suddenly x86 shows up:
The x86 architecture that you're probably reading this webpage on is widely regarded as total piece of crap. And it is. But it's a piece of crap honed to an incredibly sharp edge.
x86 implementations starting with the out-of-order implementations from the 90s are indeed "honed to an incredibly sharp edge". But x86 is never criticized because of its simplicity – quite the contrary, it's criticized precisely because an efficient implementation cannot be simple. This is why the multi-billion-dollar "honing" is necessary in the first place.
Is x86 an example of simplicity? No.
Is it a winner at the Worse is Better side? A winner – definitely. At the "Worse is Better" side – yes, I think I can show that.
But not if Worse Is Better is understood as "simplicity trumps everything", as the original essay frames it.
Worse is Better – because it's more compatible?
Unlike Unix and C, the original examples of "Worse Is Better", x86 is not easy to implement efficiently – it is its competitors, RISC and VLIW, that are easy to implement efficiently.
But despite that, we feel that x86 is "just like Unix". Not because it's simple, but because it's the winner despite being the worse competitor. Because the cleaner RISC and VLIW ought to be The Right Thing in this one.
And because x86 is winning by betting on evolutionary pressures.
Bob Colwell, Pentium's chief architect, was a design engineer at Multiflow – an early VLIW company which was failing, prompting him to join Intel to create their out-of-order x86 implementation, P6. In The Pentium Chronicles, he gives simplicity two thumbs up, acknowledges complexity as a disadvantage of x86 – and then explains why he bet on it anyway:
Throughout the 1980s, the RISC/CISC debate was boiling. RISC's general premise was that computer instruction sets … had become increasingly complicated and counterproductively large and arcane. In engineering, all other things being equal, simpler is always better, and sometimes much better.
…Some of my engineering friends thought I was either masochistic or irrational. Having just swum ashore from the sinking Multiflow ship, I immediately signed on to a "doomed" x86 design project. In their eyes, no matter how clever my design team was, we were inevitably going to be swept aside by superior technology. But … we could, in fact, import nearly all of RISC's technical advantages to a CISC design. The rest we could overcome with extra engineering, a somewhat larger die size, and the sheer economics of large product shipment volume. Although larger die sizes … imply higher production cost and higher power dissipation, in the early 1990s … easy cooling solutions were adequate. And although production costs were a factor of die size, they were much, much more dependent on volume being shipped, and in that arena, CISCs had an enormous advantage over their RISC challengers.
…because of having more users ready to buy them to run their existing software faster.
x86 is worse – as is quite clear now that, in cell phones and tablets, easy cooling solutions are not adequate and the RISC processor ARM wins big. But in the 1990s, because of compatibility, x86 was better.
Worse is Better, even if it isn't simpler – when The Right Thing is right technically, but not economically.
Worse is Better – because it's quicker?
Interestingly, Jamie Zawinski, who first spread the Worse is Better essay, followed a path somewhat similar to Colwell's. He "swum ashore" from Richard Gabriel's Lucid Inc., where he worked on what would become XEmacs, to join Netscape (named Mosaic at the time) and develop their very successful web browser. Here's what he said about the situation at Mosaic:
We were so focused on deadline it was like religion. We were shipping a finished product in six months or we were going to die trying. …we looked around the rest of the world and decided, if we're not done in six months, someone's going to beat us to it so we're going to be done in six months.
They didn't have to bootstrap the program on a small machine as in the Unix case. They didn't have to be compatible with an all-too-complicated previous version as in the x86 case. But they had to do it fast.
Yet another kind of economic constraint meaning that something else has to give. "We stripped features, definitely". And the resulting code was, according to jwz – not simple, but, plainly, not very good:
It's not so much that I was proud of the code; just that it was done. In a lot of ways the code wasn't very good because it was done very fast. But it got the job done. We shipped – that was the bottom line.
Worse code is Better than not shipping on time – Worse is Better in its plainest form. And nothing about simplicity.
Here's what jwz says about the Worse is Better essay – and, like Jeff Atwood, he gives a summary that doesn't summarize the actual text – but summarizes "what he feels it should have been":
…you should read it. It explains why mediocrity has better survival characteristics than perfection…
The essay doesn't explain that – the essay's text explains why simple-but-wrong has better survival characteristics than right-but-complex.
But as evidenced by jwz's and Atwood's comments, people want it to explain something else – something about perfection (The Right Thing) versus less than perfection (Worse is Better).
Worse is Better – evolutionarily
And it seems that, invariably, what forces you to settle for less than perfection, what elects worse-than-perfect solutions, what "thinks" they're better, are economic, evolutionary constraints.
Economic constraints are what can end up selecting for simplicity (Unix), compatibility (x86), or development speed (Netscape) – or for any other quality that results in an otherwise worse product.
Just like Alan Kay said – but contrary to the belief of Linus Torvalds, the belief that ultimately, the result of evolution is actually better than anything that could have been achieved through design without the feedback of evolutionary pressure.
From this viewpoint, Worse Is Better ends up actually better than any real alternative – whereas from Alan Kay's viewpoint, Worse Is Better is actually worse than what's achievable.
(A bit convoluted, no? In fact, Richard Gabriel wrote several follow-ups, not being able to decide if Worse Is Better was actually better, or actually worse. I'm not trying to help decide that – just to show what makes one think it's actually better or worse.)
That's the first part – I hope to have shown that your view of evolution has a great effect on your design style.
If evolution is in the center of your worldview, if you think about viability as more important than perfection in any area, then you'll tend to design in a Worse Is Better style.
If you think of evolutionary pressure as an obstacle, an ultimately unimportant, harmful distraction on the road to perfection, then you'll prefer designs in The Right Thing style.
But why do people have a different view of evolution in the first place? Is there some more basic assumption underlying this difference? I think I have more to say about this, though it's not in nearly as finished form as the first part, and I might write about it all in the future.
Meanwhile, I want to conclude this first part with some thoughts on why it all matters personally to me.
I'm a perfectionist, by nature, and compromise is hard for me. Like many developers good enough to be able to implement much of their own ambitious ideas, I turned my professional life into a struggle for perfection. I wasn't completely devoid of common sense, but I did things that make me shiver today.
I wrote heuristic C++ parsers. I did 96 bit integer arithmetic in assembly. I implemented some perverted form of thread migration on the bare metal, without any kind of OS or thread support. I did many other things that I'm too ashamed to admit.
None of it was really needed, not if you asked me today. It was "needed" in the sense of being a step towards a too-good-for-my-own-good, "perfect" solution. Today I'd realize that this type of perfection is not viable anyway (in fact, none of these monstrosities survived in the long run.) I'd choose a completely different path that wouldn't require any such complications in the first place.
But my stuff shipped. I was able to make it work. You don't learn until you fail – at least I didn't. Perfectionists are stubborn.
Then at one point I failed. I had to throw out months worth of code, having realized that it's not going to fly.
And it so happened that I was reading Unix-Haters, and I was loving it, because I'm precisely the type of perfectionist that these people are, or close enough to identify with them. And there was this essay there about Worse Is Better vs The Right Thing.
And I was reading it when I wrote the code soon to be thrown out, and I was reading it when I decided to throw it out and afterwards.
And I suddenly started thinking, "This is not going to work, this type of thing. With this attitude, if you want it all, consistency, completeness, correctness – you'll get nothing, because you will fail, completely. You're too dumb, I mean I am, also not enough time. You have to choose, you're not going to get it all so you better decide what you want the most and aim at that."
If you read the Unix-Haters, you'll notice a lot of moral outrage – perfectionists have that, moral outrage at something imperfect. Especially at someone who knowingly chooses to aim at less than perfection. Especially if it's due to the ulterior motive of wanting to succeed.
And I felt a counter-outrage, for the first time. "What do you got to show, you got nothing. What good are your ideals if you end up dead? Dead bodies smell bad to us for a reason. Technical superiority without evolutionary superiority? Evolutionary inferiority means "dead". How can "dead" be technically superior? What have the dead ever done for us?"
It was huge, for me. I mean, it took a few years to truly sink in, but that was the start. I've never done anything Right since. And I've been professionally happy ever after. I guess it's a kind of "having swum ashore".
June 9th, 2012 — wetware
"Work on important problems": ~40900 results.
"Work on unimportant problems": ~18 results.
– Google (at the time of writing), tempting the contrarian in me
It seems obvious that some problems are important to solve and some aren't, as in, curing cancer is more important than delivering social gaming. Often, people lament the abundance of tech firms working on ultimately unimportant stuff, and advise to work on important problems and not just chase the money.
I guess I agree that some problems are ultimately more important than others. But I don't think it follows that working on the important ones is better.
Working on unimportant problems can create important side-effects. A whole lot of mission-critical, world-changing and even life-saving tech is a by-product of "unimportant" things – time-wasting infotainment products, or personal pet projects started without a grand noble cause.
For instance, GPU hardware was developed to run first-person shooters with increasingly fancier graphics. Today, it powers some of the largest high-performance computing clusters where "important" science is done.
Other types of processors powering HPC clusters weren't designed for HPC, either. Hardware originally designed for scientific computing is dead – Cray is the iconic example – and has been replaced by cheaper and more powerful microprocessors designed to run things like office software. Office software arguably solves no important problem: as Berglas convincingly argues, office automation results not in increased productivity, but in increased complexity of rules and regulations.
All popular programming languages and operating systems, without a single exception I can think of, began either as personal projects or commercial projects not aiming to solve any problem "important" by itself. People hacked on the stuff for pleasure (C, Unix, Linux, Python, Ruby, PHP), or to conquer the world of businessy/officy/enterprisey software (Windows, VB, Java, C#, ASP). One language more specifically designed for the implementation of important software is Ada – but most important programs are written in something else.
And, certainly, it's the "unimportant" social companies that made publishing and coordination via Internet universally accessible. Myself, I'm not very fond of either Facebook, Twitter, etc. or the kind of political activity that's coordinated through these sites nowadays, but it's "important", without doubt – another important side-effect of unimportant time-wasting projects.
One might wonder how anything of importance can possibly come out of, say, FarmVille. I really don't know – however, I couldn't guess how anything of importance could come out of DOOM, and it did.
And then there's a reason why so much of the best tech comes out of the least "important" markets. These markets are big, and they're free. Important problems tend to imply a smallish scale, or heavy regulation, or both. So you can't finance the work, and/or can't get any work done anyway.
Consider the aerospace software market – there aren't many planes, but a whole lot of regulation. Philip Greenspun, a software entrepreneur, a flight instructor and an expert witness in both software-related and aviation-related lawsuits, had this to say about the Colgan 3407 disaster:
Who crashed Colgan 3407? Actually the autopilot did. … The airplane had all of the information necessary to prevent this crash. The airspeed was available in digital form. The power setting was available in digital form. The status of the landing gear was available in digital form. …
How come the autopilot software on this $27 million airplane wasn’t smart enough to fly basically sensible attitudes and airspeeds? Partly because FAA certification requirements make it prohibitively expensive to develop software or electronics that go into certified aircraft. It can literally cost $1 million to make a minor change. Sometimes the government protecting us from small risks exposes us to much bigger ones.
The same is happening in the automotive market, the healthcare market, etc. There's progress, of course, just nowhere near the progress in more frivolous areas – and much of the progress in "important" areas is a byproduct of progress in frivolous areas. As in, the best system for managing patients' records may well be Google Docs that doctors access from their iPads.
By the way, the importance of an issue correlates with the stupidity of rules, not just in technology, but in most things in life. The hoops you must jump through to get an "important" product out the door are not fundamentally different from airport security checks.
The airport security theater results in little added security. Likewise, the quality theater necessarily surrounding any life-saving technology results in little added quality. However, for much the same reasons, both are unavoidable. I've been working on automotive accident prevention systems for the last decade, and as time goes by, the regularly scheduled cavity searches are only getting worse.
So if you ask me – by all means, work on unimportant problems. They're often more fun to work on, and ultimately you never know how important they really are.
April 23rd, 2012 — wetware
Personally, I love email:
- It's still the best way to talk online, overall – the most open format, the best client programs.
- Online beats offline since everything is archived and searchable.
- Written beats spoken since you have time to think stuff through, and you can attach images, spreadsheets, code, etc.
However, I noticed that email discussions bring out the worst in people, whereas walking over and talking to them brings out the best in them. I guess it's because emails feel impersonal, leading to "email rage" much like feeling isolated inside a car leads to "road rage".
On top of that, for many people email is their todo list – there still being no better alternative for keeping a todo list. What this means, though, is that sending an email with a suggestion implying work on their part, without prior face-to-face discussion, looks like a written order to do something. I believe this impression can't be avoided even with the most polite, "pretty please"-infested wording. It still feels like "you didn't even bother to talk to me and you expect me to do things!"
So I decided, roughly, to never open any discussion over email. It's fine for followups and bug reports, and it's fine if it's known to work for the people involved. But my default assumption is that email is an evil thing capable of creating tensions and conflicts out of nowhere. Much better to call the person, check that they're available to talk and go talk to them. Then, maybe, send them the summary over email to get all that archiving and searching goodness without the evil price.
January 23rd, 2012 — software, wetware
Today I learned about HyperCard, a system where you could implement a basic calculator in a few easy steps, one of them involving the following impressively English-like snippet:
get name of me
put the value of the last word of it after card field "lcd"
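(To de-mystify the snippet: as I understand it, each digit button, when pressed, appends its own digit to the "lcd" field – the calculator's display. Here's a rough Python rendering, with made-up names, of what I believe is going on:)

# Each button's script gets the button's own name, e.g. 'card button "7"',
# takes the value of its last word ("7") and appends it to the display field.
# Class and method names are made up for illustration.
class CalculatorCard:
    def __init__(self):
        self.fields = {"lcd": ""}   # card field "lcd" – the display

    def button_pressed(self, button_name):
        last_word = button_name.split()[-1]   # '"7"'
        value = last_word.strip('"')          # '7'
        self.fields["lcd"] += value           # put it after card field "lcd"

card = CalculatorCard()
card.button_pressed('card button "7"')
card.button_pressed('card button "3"')
print(card.fields["lcd"])   # 73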
The article depicts HyperCard as a system making programming accessible to people who aren't professional developers. It is claimed that Apple likely killed off the product because it's inconsistent with its business model (roughly, devices bought to consume rather than to create).
I sympathize with the sentiment – I very much like stuff you can tinker with, and dislike business models discouraging tinkering. However, I don't think businesses have the power to prevent anything that works well for many people from happening. A conspiracy of typewriter manufacturers could never stop the PC.
This seems especially true with software, where huge systems can be built by volunteers in their spare time. If an idea works, if a software system wants to be built around it, it will be built.
Of course it may be the case that the time hasn't come for a programming system for non-developers. It's just my opinion that it never will come, not really. Why?
Not because you need much education to program. Very useful stuff can be built without knowing why optimal sorting is O(n*log(n)), or even what big O means.
Not because programming languages must have, or typically have, arcane syntax. As a kid, I found Pascal's somewhat English-like "begin" and "end" off-putting, and was greatly relieved to discover Algolish braces. How close syntax can get to natural language, and whether it's at all beneficial to go there, is IMO an irrelevant question. The fact is that programming languages can be very readable to people.
The main reason is that development leads to maintenance, and maintenance leads to suffering.
For example, if your program stores persistent data, and you want to change it, your changes to the program must be done such as to preserve the meaning of existing data. This part of development causes major pain everywhere, from video recording to financial databases to compiler construction. No amount of knowledge and no amount of support from the tools make this fun.
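(A minimal sketch of the chore, in Python, with made-up field names: say version 1 of your program saved records without a "country" field, and version 2 added one. From now on, every load has to understand both shapes of the data – and every future format change adds another branch like this:)

import json

CURRENT_VERSION = 2

def load_contact(raw_bytes):
    record = json.loads(raw_bytes)
    version = record.get("version", 1)
    if version == 1:
        # Old data has no "country"; pick a default and hope it's right.
        record["country"] = "unknown"
        record["version"] = 2
    if record["version"] != CURRENT_VERSION:
        raise ValueError("don't know how to read version %s" % record["version"])
    return record

# Data written by the old program must keep working forever:
print(load_contact(b'{"name": "Alice", "phone": "555-1234"}'))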
There are many other things. Everything in your program's environment is unstable and you must constantly update the program to keep up. Your program gets cluttered with options and you forget what does what. There are cases you didn't test – spaces in the names, empty data fields, reverse order of operations.
As a result, maintenance means dealing with misbehaving programs that eat data, send trash around, or simply make you wait for an hour and then watch them produce garbage.
This never ends and quickly stops being fun. When something useful cannot be done quickly and isn't the average person's idea of fun, it becomes the business of professionals – or hardcore hobbyists indistinguishable from professionals. As a counter-example, many people like cooking in their spare time without necessarily getting close to the level of a chef or spending that much time cooking. Con Kolivas, on the other hand, could technically be called a "hobbyist", but he could be called a "professional" as well.
Maybe I'm wrong, maybe there are plenty of places where a sprinkle of logic – in textual form or graphical form or whatever form – can be figured out quickly, left alone and be useful ever after. It's just that it's usually the opposite with me. Every time I have a nice little idea it takes me 10x the time it "should" take to implement, and most things keep biting me once in a while for a long time.
Programming isn't for everyone because it is not fun to maintain what was fun to program.