Entries Tagged 'wetware'

Why programming isn't for everyone

Today I learned about HyperCard, a system where you could implement a basic calculator in a few easy steps, one of them involving the following impressively English-like snippet:

on mouseUp
  get name of me
  put the value of the last word of it after card field "lcd"
end mouseUp

The article depicts HyperCard as a system that made programming accessible to people who aren't professional developers. It claims that Apple likely killed off the product because it was inconsistent with its business model (roughly, devices bought to consume rather than to create).

I sympathize with the sentiment – I very much like stuff you can tinker with, and dislike business models discouraging tinkering. However, I don't think businesses have the power to prevent anything that works well for many people from happening. A conspiracy of typewriter manufacturers could never stop the PC.

This seems especially true with software, where huge systems can be built by volunteers in their spare time. If an idea works, if a software system wants to be built around it, it will be built.

Of course it may be the case that the time hasn't come for a programming system for non-developers. It's just my opinion that it never will come, not really. Why?

Not because you need much education to program. Very useful stuff can be built without knowing why optimal sorting is O(n*log(n)), or even what big O means.

Not because programming languages must have, or typically have, arcane syntax. As a kid, I found Pascal's somewhat English-like "begin" and "end" off-putting, and was greatly relieved to discover Algolish braces. How close to natural language programming syntax can get, and whether it's at all beneficial to go there, is IMO an irrelevant question. The fact is that programming languages can be very readable to people.

The main reason is that development leads to maintenance, and maintenance leads to suffering.

For example, if your program stores persistent data and you want to change it, your changes to the program must be made so as to preserve the meaning of existing data. This part of development causes major pain everywhere, from video recording to financial databases to compiler construction. No amount of knowledge and no amount of support from the tools makes this fun.
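A minimal sketch of the kind of burden I mean, in Python – the record format and field names are invented for illustration, but the shape is typical: once version 1 has written data to disk, every later version must carry code that understands it, forever.

import json

CURRENT_VERSION = 2

def load_record(text):
    # Load a record saved by any past version of the program.
    rec = json.loads(text)
    version = rec.get("version", 1)   # version 1 didn't tag its records
    if version == 1:
        # version 1 stored a single "name" field; later versions split it
        first, _, last = rec.pop("name", "").partition(" ")
        rec["first_name"], rec["last_name"] = first, last
    rec["version"] = CURRENT_VERSION
    return rec

# Old data never goes away, so this branch can never be deleted.
print(load_record('{"name": "Ada Lovelace"}'))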

There are many other things. Everything in your program's environment is unstable and you must constantly update the program to keep up. Your program gets cluttered with options and you forget what does what. There are cases you didn't test – spaces in the names, empty data fields, reverse order of operations.

As a result, maintenance means dealing with misbehaving programs that eat data, send trash around, or simply make you wait for an hour and then watch them produce garbage.

This never ends and quickly stops being fun. When something useful cannot be done quickly and isn't the average person's idea of fun, it becomes the business of professionals – or hardcore hobbyists indistinguishable from professionals. As a counter-example, many people like cooking in their spare time without necessarily getting close to the level of a chef or spending that much time cooking. Con Kolivas, on the other hand, could technically be called a "hobbyist", but he could be called a "professional" as well.

Maybe I'm wrong, maybe there are plenty of places where a sprinkle of logic – in textual form or graphical form or whatever form – can be figured out quickly, left alone and be useful ever after. It's just that it's usually the opposite with me. Every time I have a nice little idea it takes me 10x the time it "should" take to implement, and most things keep biting me once in a while for a long time.

Programming isn't for everyone because it is not fun to maintain what was fun to program.

Compensation, rationality and the project/person fit

To negotiate a compensation, you need to compare to something. There are two principally different things people compare compensation to:

  1. Available alternatives. Employee: "I could get twice as much at Microsoft." Employer: "We can hire Bob for a half of your salary."
  2. Peers' compensation. Employee: "Jeff gets twice as much and he's not better than me." Employer: "John gets half your salary and you're not better than him."

I believe the second approach – comparing "ability" and having a common level of compensation for people "at the same level of ability" – is the worse approach. Its main drawbacks are:

  • People look into each other's pockets too much that way
  • It is, in a basic economic sense, an "irrational" approach
  • It ignores the project/person fit

I'll discuss all of these drawbacks, mostly focusing on ignoring the project/person fit – in my opinion, the worst part.

Looking into each other's pockets

Even if management doesn't disclose the way people are labeled and what compensation corresponds to each label, people have an incentive to find out all about this. This means that everyone will know how much everyone else gets, and how one must be labeled to earn a given amount.

People looking into each other's pockets is bad for everyone:

  • Invariably people will find others' compensation unjust, which doesn't improve team spirit.
  • You get Peter Principle situations where, say, a strong developer's only way to get a raise is to become a manager, at which he might very well suck, etc.
  • Sometimes the employer does want to set exceptional conditions for someone – pay someone significantly more or less than someone else with the same title. However, if everyone tends to find out about everyone else's compensation, it becomes hard to make these exceptions as it is guaranteed to upset people.

Others' compensation is one of those things that are better left unknown. It's a pity if you tempt people to find it out.

Comparing to imaginary alternatives is "irrational"

If I'm working on X, Jeff works on Y and John works on Z, it makes no sense to compare my compensation to theirs. Whoever is unhappy with the current arrangement and terminates it – that is, whether I quit or get fired – neither Jeff nor John will replace me, nor will I replace them.

Jeff and John usually have to keep working on Y and Z, so they can't take over X if I quit. Nor will I work on Y or Z – even if I quit not the company but just my team and join their team in the same company, they're already there working on Y and Z, so I'd end up working not on Y or Z, but on W.

Therefore, the employer should compare my compensation to what he'd pay someone else to do X, including the cost of training him. I should compare to what I'd be paid to do W, including the cost of having to learn to do it.

Why should we compare to these things and not others? Because these are our actual alternatives. Jeff's and John's compensation has nothing to do with our actual alternatives.

To which someone can legitimately still reply: why? Someone can say, I still want to compare to Jeff's and John's compensation. So what if you're saying that it's "economically irrational" to consider things unrelated to the real alternatives in a price negotiation? It's my price negotiation, I can compare to whatever I want!

That someone would be right, in a way. It's not like there's a monopoly on the definition of "economic rationality" – one could certainly find an economist claiming that looking at your peers is the rational thing to do, or at least the natural thing to do.

Say, Robert Frank – "Choosing the Right Pond". You know, evolutionary considerations – you're trying to impress a potential mate with your salary, the mate compares within the "pond", an unusually high salary is an externality, etc.

Basically it's partners that you compete for, and it's your peers who you compete against, so it's their compensation that you should care about. (Does this sound just like your workplace? I hope not…)

As an aside, I don't understand evolutionary definitions of "rationality", not really. I mean, if the ultimate goal is to pass your genes, shouldn't you become a serial rapist targeting nuns or someone else who isn't likely to use abortion? If you aren't doing this, and you advocate the evolutionary view of rationality, aren't you proving your own irrationality by your own actions? And if you are irrational, then why should irrational people like you be trusted to define rationality in the first place?

But the fact that I don't like the "evolutionary" view of rationality and prefer, in this context, the "classical economics" definition is just my opinion. An employer can have his own – just like a friend who kept trying to sell his car, for a long time, until he found someone willing to pay the high price.

Another friend said, when they discussed markets, "what you did is irrational – markets don't behave that way – in a market, you lower the price if you don't have a buyer". To which the seller responded – "first, I did sell high eventually; second – you can't tell me how markets behave – I am the market!"

So yeah, if you're an employer or an employee and you want to compare compensations regardless of what alternatives are actually available – you can of course do this. You are the market – economists, bloggers or anyone else can try to describe your behavior and predict its outcomes, but they aren't entitled to label it "rational" or "irrational", not really.

All that can be said is that considering imaginary alternatives instead of the real ones can very well make you face the real ones.

That is, suppose you say to an employee, "John gets half your salary and you're not better than him." Suppose the employee replies, "I could get twice as much at Microsoft." His alternative is real – he quits. Your alternative is not real – John is not available to replace the guy who quit. Now you're facing your real alternatives – which can be much worse than raising the guy's salary would have been.

Isn't it a better idea to consider your real alternatives during the negotiations?

To which one could reply – how bad can those alternatives really be? I mean, we hired John, right? And he's just as good. So we can always hire this sort of person for this sort of price, right? Yeah, there are the training costs, but that's all there is to it, not?

I believe that there's more to it than training costs. The big thing is the project/person fit.

The project/person fit

It's magical. If a person wants to do something, I'm so much in favor of letting them, whatever other things they'd have to stop doing. I mean, there are things which nobody will ever do except the one person – or maybe one of two or three people – to whom it's important.

Or someone could do it, but not nearly as well. And not because he's "worse" – he may be "better" on all the common benchmarks (IQ, grades, reputation, whatever). He's not "worse" in any quantifiable way, but it just doesn't click – the project is not a good fit for him.

It's a depressing thought for a manager – a part of a manager's helplessness. A manager can't do anything himself – the most helpless creature around. He's always responsible for what other people do. He can pick the people, talk to people, negotiate with people, reshuffle people. But that is all he can do – and not a single bit of real work that must be done to make his project succeed.

This means an extreme dependence on other people, which is stressful. The project/person fit makes this much worse. You're basically constrained to not move people away from projects when there's this magical click. They're irreplaceable, so you depend on them tremendously – not very comforting. So it's natural to argue that this magic business doesn't really exist – everyone is replaceable.

Now, I'm not saying that people actually "can't be replaced" – far from it. That thought would make me lose sleep as a team leader – and it would offend me as a programmer.

I mean, if our processors are "universal computing machines", then surely programmers ought to be universal as well, right? I much prefer to think of myself as a "replaceable cog" – but a universal cog – than as an irreplaceable part of the peculiar machinery of my current workplace, obviously useless outside it because of my extreme specialization.

So actually I'm at the other extreme on this one, most likely – I don't think very much of "relevant experience", and I'll be the first to say that a person new to something will cope with it very well, don't worry. Everyone is replaceable, because everyone can deal with everything.

For instance, in our recent round of work on hardware verification, we had a tough deadline, so there was a single hardware module that 5 programmers worked on. Of them, 3 had no experience in hardware verification at all, so they had to learn about hardware simulators and waveform viewers and stuff.

Normally, just one person would do that work, but it'd take longer and we couldn't afford the latency. We also had to swap people in and out to do other things, and each had to continue where the previous person left off. And it worked, basically. So I think I'm very much at the other extreme – programmers are universal, and they'll deal.

What do I mean by this "project/person fit" then?

What I mean is that there's still a 10x productivity difference between a person struggling with this important stuff that you dumped on them but they kinda don't understand or care about very much, and a person who wants the thing done.

Actually it's more than 10x – you can't quantify it, it's qualitatively different. A bird doesn't just move faster than a snail. You can't express the difference between crawling and flying in a single number, even if your HR policy mandates this sort of quantification.

People have their own priorities

A manager classifies things as important and unimportant, and he might be tempted to think that somebody gives a damn about his view of these matters.

But they don't give a damn. They classify things as "stuff the manager wants" and "stuff that they want". Stuff that's only important to them because you said so crawls. Stuff that they feel is important and interesting flies.

Managers might think that work gets done because they want it done. It's true – but the best work gets done because people who do it want it done.

And people are amazing in the diversity of their tastes. Taste depends on many things – personal talents and interests, personal history that makes some problems closer to your heart than others, and so on – but no matter what the reasons are, the result is that tastes are wildly different.

Consider the following things, all among the stuff our team works on:

  • A distributed build & run server.
  • A debugger agent – porting gdb to custom hardware and OS.
  • A graphical editor for tagging objects in video clips.
  • A static memory manager built around C language extensions and a constraint solver.

I think all of them are important, and all of them are interesting. As a programmer, I'd work on any of them. I mean, does any of this sound like boring grunt work? Certainly they're all nicer than verifying, under time pressure, a hardware module that you didn't specify – at least if you ask me.

However, in my team, there's just one, two, sometimes ~1.5 people who actually want to work on each of these things. Moreover, most of them have an aversion to most other things on the list.

Now, if it was strictly necessary, any of them would work on any of these projects. And they'd do a good job even if they got the one they hated the most. But it'd be uninspired, and nobody could blame them.

How easy is it to find someone who'd love to do a project? I'll tell you – real damn hard. I mean, I'm a language geek; in my opinion, everyone wants to work on programming language extensions. And you know what? They don't. Not really. Most don't want to at all. Then there are many who like the idea, in principle – but not that kind of language, or not that kind of extensions. There's no spark in their eyes – until the right person shows up.

Similarly with the other things. You'd think that a person who likes the debugger agent would also like the distributed build server, not? I'd expect that, definitely – but she doesn't. And you can't make someone like something. Usually you can't even pay them to like it. They just won't.

Some projects are optional. With these, I will wait for years until the right person shows up. I feel guilty – people are asking for it, it'd be great if we had it, it would become an enabler for things now unthinkable. But who am I kidding? Nobody wants to do this now, not really. Better wait until he shows up.

When he shows up, what do I say? I say, keep him. Really. Don't let the thing turn into a wasteland just because programmers (actually) are universal, replaceable cogs!

Some projects are not optional – you must do them no matter what. When there's no right person to do such a thing – watch years of work, tears and sweat produce a mountain of code dripping with hate and depression. I'm serious – sometimes I can actually look at code and see how nobody ever loved it.

I've seen brilliant people produce disgusting code nobody wants to touch. Certainly I couldn't help it myself – my sense of duty did not help. I did it on time, it worked, and it was a toxic waste.

It doesn't help that the manager thinks it's important. It doesn't help that I agree with him. If I don't like it, I won't do it very well.

Sometimes – many times – the right person arrives years after the wrong people – the wrong people for this project – have been spitting blood trying to make it work. It takes a few months and the scenery is transformed. Mountains of hate are gone. You have a working system. People who lost hope for this particular area to ever become habitable, to stop smelling of fail, suddenly smile.

Would you let that person go, just because John is "just as good" and you pay him less? There is no way that John is going to take over this thing. Even if he's available. He isn't interested. He couldn't care less. He could take over just like anyone else, but it'd be toxic waste all over again. Come on!

Sometimes a programmer will be moved away from a project – or not be allowed to do it – because of his already high compensation. "We can find someone cheaper to do this". Yes – but not someone who wants to do this! This just brings tears to my eyes.

But if he loves the project, he won't quit, right?

Good thinking. People who can be replaced with someone like John should therefore be compared to John. People who can't be replaced with someone like John can still be compared to him – they're the ones who love their work, so they likely won't quit, and then we can sensibly compare everyone to everyone in a reasonable manner.

They'll quit.

They'll quit even if it's "irrational" for them. People can quit a project they love over compensation, and then spend years until they find something nice to work on. Often they feel it wasn't worth it, or at least are unhappy with their working situation.

But it doesn't help that you were right and that they should have stayed, settling for the fair compensation level of John and working on their favorite stuff. It doesn't help because the loss is yours as well.

Why do people behave in this "irrational" way, apart from having too high expectations about their alternatives? The economist David Friedman gives an evolutionary explanation, if you like that sort of thing:

…human beings regard the usual terms of exchange as right and any deviation from those terms that makes them worse off as a presumptively wicked act by the other party. This feature resulted in human beings that possessed it getting better terms in bilateral monopoly bargains in the environment in which we evolved…

"Bilateral monopoly" is basically the situation you and your employee find yourselves in once a project "clicks" with him. It's hard for you to replace him – and it's hard for him to replace you. This may tempt you to lower the price you're willing to pay. The response Mother Nature had equipped us with for these cases is that the employee thinks you're wicked, and he quits.

This reaction is "irrational" – in the sense that he's now worse off. But it's very much "rational", in the sense that the threat of "irrational" quitting should improve his terms – if you know that the threat is real, despite the fact that actually quitting would make him worse off.

Well, in my experience, the threat is very real alright. Worth taking into account.

Why management likes to set standard compensation levels

I suspect the benefit is that it makes decision-making easier on the scale of a large company. It works reasonably well and is very easy to implement. It's a bit like using a simple heuristic in code because it's just 5 lines of code and it sort of works.

"Bounded rationality", if you like (…isn't "bounded rationality" what used to be called "stupidity"? Aren't "the cognitive limitations of the mind" mentioned in the article also called "stupidity"? I'm not mocking stupidity – I'm certainly equipped with a high degree of stupidity myself, and you can trace its influence on my decision-making. I'm just wondering why invent new terms when we already have perfectly good ones.)

Anyway, if you know why standard compensation levels are a good idea – a rational argument for them in an unbounded way – let me know in the comments. Puzzles me plenty.

Cycles, memory, fuel and parking

In high-performance, resource-constrained projects, you're not likely to suddenly run out of cycles – but you're quite likely to suddenly run out of memory. I think it's a bit similar to how it's easy to buy fuel for your car – but sometimes you just can't find a parking spot.

I think the difference comes from pricing. Processor cycles are priced similarly to fuel, whereas memory bytes are priced similarly to parking spots. I think I know the problem but not the solution – and will be glad to hear suggestions for a solution.

Cycles: gradual price adjustment

If you work on a processing-intensive project – rendering, simulation, machine learning – then, roughly, every time someone adds a feature, the program becomes a bit slower. Every slowdown makes the program a bit "worse" – marginally less useable and more annoying.

What this means is that every slowdown is frowned upon, and the slower the program becomes, the more a new slowdown is frowned upon. From "it got slower" to "it's annoyingly slow" to "we don't want to ship it this way" to "listen, we just can't ship it this way" – effectively, a developer slowing things down pays an increasingly high price. Not money, but a real price nonetheless – organizational pressure to optimize is proportionate to the amount of cycles spent.

Therefore, you can't "suddenly" run out of cycles – long before you really can't ship the program, there will be a growing pressure to optimize.

This is a bit similar to fuel prices – we can't "suddenly" run out of fuel. Rather, fuel prices will rise long before there'll actually be no fossil fuels left to dig out of the ground. (I'm not saying prices will rise "early enough to readjust", whatever "enough" means and whatever the options to "readjust" are –  just that prices will rise much earlier in absolute terms, at least 5-10 years earlier).

This also means that there can be no fuel shortages. When prices rise, less is purchased, but there's always (expensive) fuel waiting for those willing to pay the price. Similarly, when cycles become scarce, everyone spends more effort optimizing (pays a higher price), and some features become too costly to add (less is purchased) – but when you really need cycles, you can get them.

Memory: price jumps from zero to infinity

When there's enough memory, the cost of an allocated byte is zero. Nobody notices the memory footprint – roughly, RAM truly is RAM, the cost of memory access is the same no matter where objects are located and how much memory they occupy together. So who cares?

However, there comes a moment when the process won't fit into RAM anymore. If there's no swap space (embedded devices), the cost of an allocated byte immediately jumps to infinity – the program won't run. Even if swapping is supported, once your working set doesn't fit into memory, things get very slow. So going past that limit is very costly – whereas getting near it costs nothing.

Since nobody cares about memory before you hit some arbitrary limit, this moment can arrive very suddenly: without warning, you can't allocate anything.

This is a bit similar to a parking lot, where the first vehicle is as cheap to park as the next and the last – and then you can't park at all. Actually, it's even worse – memory is more similar to an unmarked parking lot, where people park any way they like, leaving much unused space. Then when a new car arrives, it can't be parked unless every other car is moved – but the drivers are not there.

(Actually, an unmarked parking lot is analogous to fragmented memory, which heap compaction solves at the cost of some runtime latency. But the biggest problem with real memory is that people allocate many big chunks where few small ones could be used – and probably would be used if memory cost something above zero. Can you think of a real-world analogy for that?..)
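A minimal sketch of the fragmentation part of the analogy, in Python, with an invented toy "lot": the total free space is large, but no contiguous run is big enough for the new arrival.

# Toy model of an unmarked parking lot / fragmented heap.
# 'X' marks an occupied slot, '.' a free one.
lot = list("XX..X...XX.X....XX..X.X.")

def can_allocate(lot, size):
    # True if there's a contiguous run of `size` free slots.
    run = 0
    for slot in lot:
        run = run + 1 if slot == "." else 0
        if run >= size:
            return True
    return False

print("free slots in total:", lot.count("."))         # 14 – plenty of room overall...
print("fits a 5-slot request:", can_allocate(lot, 5)) # ...but False: no 5 in a row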

Why not price memory the way we price cycles?

I'd very much like to find a way to price memory – both instructions and data – the way we naturally price cycles. It'd be nice to have organizational pressure mount proportionately to the amount of memory spent.

But I just don't quite see how to do it, except in environments where it happens naturally. For instance, on a server farm, larger memory footprint can mean that you need more servers – pressure naturally mounts to reduce the footprint. Not so on a dedicated PC or an embedded device.

Why isn't parking like fuel, for that matter? Why are there so many places where you'd expect to find huge underground parking lots – everybody wants to park there – but instead find parking shortages? Why doesn't the price of parking spots rise as spots become taken, at least where I live?

Well, basically, fuel is not parking – you can transport fuel but not parking spots, for example, so it's a different kind of competition – and then we treat them differently for whatever social reason. I'm not going to dwell on fuel vs parking – it's my analogy, not my subject. But, just as an example, it's perfectly possible to establish fuel price controls and get fuel shortages, and then fuel becomes like parking, in a bad way. Likewise, you could implement dynamic pricing of parking spots – more easily with today's technology than, say, 50 years ago.

Back to cycles vs memory – you could, in theory, "start worrying" long before you're out of memory, seeing that memory consumption increases. It's just not how worrying works, though. If you have 1G of memory, everybody knows that you can ship the program when it consumes 950M as easily as when it consumes 250M. Developers just shrug and move along. With speed, you genuinely start worrying when it starts dropping, because both you and the users notice the drop – even if the program is "still usable".

It's pretty hard to create "artificial worries". Maybe it's a cultural thing – maybe some organizations more easily believe in goals set by management than others. If a manager says, "reduce memory consumption", do you say "Yes, sir!" – or do you say, "Are you serious? We have 100M available – but features X, Y and Z are not implemented and users want them!"

Do you seriously fight to achieve nominal goals, or do you only care about the ultimate goals of the project? Does management reward you for achieving nominal goals, or does it ultimately only care about real goals?

If the organization believes in nominal goals, then it can actually start optimizing memory consumption long before it runs out of memory – but sincerely believing in nominal goals is dangerous. There's something very healthy in a culture skeptical about anything that sounds good but clearly isn't the most important and urgent thing to do. Without that skepticism, it's easy to get off track.

How would you go about creating a "memory-consumption-aware culture"? I can think of nothing except paying per byte saved – but, while it sounds like a good direction with parking spots, with developers it could become quite a perverse incentive…

Graham & Coase: when big companies are a good idea

Paul Graham was once asked the following RAQ (rarely asked question):

How can I avoid turning into a pointy-haired boss?

His answer:

The pointy-haired boss is a manager who doesn't program. So the surest way to avoid becoming him is to stay a programmer. What tempts programmers to become managers are companies where the only way to advance is to go into management. So avoid such companies and work for (or start) startups.

Why be a manager when you could be a founder or early employee at a startup?

Why?! Oh wow. I could fill a book explaining why. But many of my reasons are my own, and aren't relevant to you unless you're much like me. So I'll focus on the general answer to the question implied by Paul Graham: Why do large firms exist?

The question was addressed by the economist Ronald Coase in his article "The Nature of the Firm". This article, together with his work on externalities (the Coase Theorem), earned him a Nobel Prize in economics – which is one piece of evidence that the question is interesting and far from trivial.

Suppose there are no good answers to Paul Graham's rhetorical question. That is, it's always objectively better to start or join a small firm than to be a manager in a large one. You'll always get more work done, or will be more satisfied, or both. Well, if so, competition should eventually drive large firms out of business. So why are they still around?

For starters, clearly there are problems best solved by small groups of people armed with off-the-shelf tools. For instance, two iconic YC startups funded by Paul Graham, Reddit and Dropbox, each solve a problem with the help of a few programmers and a bunch of commodity servers running a commodity software stack. A larger company could hardly improve on what they do.

Note that off-the-shelf products are key to being small (or at least starting small). Reddit or Dropbox could never build those servers from scratch. A small group of people cannot erect a $5G chip fabrication facility. Building and operating a fab – or a search engine – requires lots of custom development, so you need a lot of people.

Or do you?

Of course the total number of people involved has to be very large. But it doesn't follow that they should be organized as big companies. Instead, the work could be done by many small organizations, each contracting out most of the work to others.

You're big because you hire. Why hire if you can buy, contract out – and stay small?

Indeed, this seems to make perfect sense. To quote Wikipedia's summary of The Nature of the Firm (1937):

The traditional economic theory of the time suggested that, because the market is "efficient" (that is, those who are best at providing each good or service most cheaply are already doing so), it should always be cheaper to contract out than to hire.

Then why do most people prefer employment to self-employment, as evidenced by their actions (and an economist never trusts anything but actions as a tool to reveal someone's preferences)? Why do I hate the idea of running a small firm?

Either the "traditional economic theory" is right – one should run a small firm, and I'm a freak of nature destined to extinction due to economic evolutionary pressure, together with much of the population – or the theory is lacking, and there should be a concept formalizing my aversion to self-employment.

And in fact, at this point, Coase introduces the term – transaction costs:

Coase noted, however, that there are a number of transaction costs to using the market; the cost of obtaining a good or service via the market is actually more than just the price of the good.

Oh, yeah – MUCH more if you ask me.

Other costs, including search and information costs, bargaining costs, keeping trade secrets, and policing and enforcement costs, can all potentially add to the cost of procuring something via the market.

YES! Here's a Nobel Prize-winning economist from the notoriously "pro-free market" Chicago school who UNDERSTANDS ME. He knows why I hate markets. ("Pro-market" doesn't mean you love markets, just that you think governments are even worse.)

This suggests that firms will arise when they can arrange to produce what they need internally and somehow avoid these costs.

Avoiding these costs can enable work that just can't happen outside the context of a big company.

For instance, I work on chips for embedded computer vision, at a company that's now fairly large. This is an example where a lot of people need to cooperate in a custom development effort (as opposed to fewer people using off-the-shelf products).

In theory, I could start a computer vision hardware startup instead of it being an internal project. In practice, it wouldn't work, because:

  • I wouldn't know what to build. Hardware accelerates algorithms – what algorithms? I only know because I'm in the same company with developers of very effective unpublished algorithms. Without that knowledge, what could I build – an OpenCV accelerator? Good luck selling that.
  • I couldn't build it nearly as efficiently. A great source of efficiency is fitting hardware to the specific workload. But if we were not a part of the company but a vendor, the company would make sure there are competing vendors to keep prices low. This means that we, no longer having a guaranteed customer, would have to support as many different workloads as possible, to increase the pool of potential customers. As a rule, more generic hardware is less efficient.
  • I couldn't explain how to program it. Once you give away your programming model to the customer – as you have to if you want them to, well, program your processors – only very strong patents can prevent them from cloning your hardware (possibly with the help of your competitor). A big company that, among other things, designs its own hardware doesn't have to explain it to the outside world. And even if its hardware ends up cloned – it's just one part of the secret knowledge behind the product. But if you're a small company only making hardware and it's cloned, you're busted. You shouldn't even start before making sure your ideas are "sufficiently patentable" – which you can't know before you've developed those ideas.

Of course, the number one real reason I couldn't run a hardware startup is that I'm no businessman. But the problems above are also very real, and frequently insurmountable for people who can do business. Not all custom development is impossible to successfully outsource, but much is. The problems result from economic fundamentals.

In econ-speak, such problems are collectively known as "search and information costs, bargaining costs, keeping trade secrets, and policing and enforcement costs". Indeed, all these problems were featured in my example. In plain English, a simple way to sum up all those problems is trust – or more precisely, the lack thereof:

  • A company can't trust a vendor, so a vendor can't know its algorithms.
  • A company can't trust a vendor to keep quality high and prices low if it guarantees to remain its customer…
  • …So a vendor can't trust a company to remain its customer, so it can't invest too much in a solution just to that company's specific needs.
  • A vendor can't trust a company to keep buying from it if enough knowledge is given away so that the product can be cloned instead – so some products are not worth building.

When you work for a big company, you deal with coworkers, and you're all playing for the same team. The smaller the company, the more you deal with customers and vendors, which means playing against them. There's no such word as "co-customer" or "co-vendor" for a good reason.

At least that's how things are framed by the rules. The rules say that all employees are agents acting towards a common goal, "to promote the company's interests" – whereas different companies have different bottom lines and different interests.

Of course, reality is never like the rules – in reality, everyone in the company plays by their own rules, attempting to promote the interest of any of the following – or a combination:

  • Shareholders
  • Customers
  • Employees
  • His team
  • His manager
  • His friends
  • Himself

So in reality, of course there's a lot of chaos in a big company. And it doesn't help that the bigger it is, the harder it is to make sense of what's going on:

…There is a natural limit to what can be produced internally, however. Coase notices "decreasing returns to the entrepreneur function", including increasing overhead costs and increasing propensity for an overwhelmed manager to make mistakes in resource allocation. This is a countervailing cost to the use of the firm.

…Which explains why we aren't all employed by a single all-encompassing huge company.

But at least the rules of a large company frame things right – as cooperation more than competition. (Competition generally isn't an end – it's a means to ultimately force people to cooperate, and, as Coase points out, it only gets you this far.)

Of course, corporate rules also create competition – employees compete for raises, etc. But in practice, overall most would agree that it's much safer to trust co-workers than customers or vendors.

Why be a manager when you could be a founder or early employee at a startup? Here's the part of my answer that is based on economic fundamentals.

I specialize in areas requiring custom development by many people. Many people can only tightly cooperate under rules implying trust. Therefore they must not be customers and vendors, but coworkers, which leads to large firms. Such is The Nature of the Firm.

Of course there are problems that can be solved by a small group of people with mutual trust, without tightly-coupled, joint development with others – for example, the problems solved by Reddit and Dropbox. One reason I personally never looked that way is my aversion to business. Such is my own nature.

It just so happens that the nature of the firm suits my nature nicely – because there are situations where big companies are a good idea. When you can't buy and have to build, trust is fundamental to getting the job done.

UPDATE (December 9, 2011): just found an interesting analogy between company size and program size. Doing many things in one big program can be easier than using many small programs because of "transaction costs" – the cost of exchanging data between the programs.

Engineers vs managers: economics vs business

When chased by a bear, engineers want to run faster than the bear, managers want to run faster than you. This is known as "the best vs the good enough", and is a very common theme.

For instance, company A releases a good enough technology, company B releases the best technology on the market. B fails and A succeeds, because A releases earlier, or because A's technology is more compatible with the status quo, etc. Engineers will commonly feel sympathy for B, managers will applaud the shrewdness of A.

It's a common story and an interesting angle, but the "best vs good enough" formulation misses something. It sounds as if there's a road towards "the best" – towards the 100%. Engineers want to keep going until they actually reach 100%. And managers force them to quit at 70%:

There comes a time in the life of every project where the right thing to do is shoot the engineers and ship the fucker.

However, frequently the road towards "the best" looks completely different from the road to "the good enough" from the very beginning. The different goals of engineers and managers make their thinking work in different directions. A simple example will illustrate this difference.

Suppose there's a bunch of computers where people can run stuff. Some system is needed to decide who runs what, when and where. What to do?

  • An engineer will want to keep as many computers occupied at every moment as possible – otherwise they're wasted.
  • A manager will want to give each team as few computers as absolutely necessary – otherwise they're wasted.

These directions aren't just opposite – "as many as possible" vs "as few as necessary". They focus on different things. The engineer imagines idle machines longing for work, and he wants to feed them with tasks. The manager thinks of irate users longing for machines, and he wants to feed them with enough machines to be quiet. Their definitions of "success" are barely related, as are their definitions of "waste".

An engineer's solution is to have everybody submit their jobs to a queue managed by a central server. The Condor software is an example implementation; there are many others, with many subtle issues and differences – or you can roll your own. The upshot is that once a machine becomes idle, it can immediately yank the next job from the server's queue. As long as there's anything left to do, no machine is idle. This is the best possible situation.
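A minimal sketch of that scheme in Python – it assumes nothing about Condor's actual interface, just a shared queue that idle workers pull from, so no machine sits idle while jobs remain:

import queue
import threading
import time

jobs = queue.Queue()
for i in range(20):
    jobs.put(f"job-{i}")              # whatever users submitted

def worker(machine_name):
    # An idle machine immediately yanks the next job from the central queue.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return                    # nothing left to do
        time.sleep(0.01)              # stand-in for actually running the job
        print(f"{machine_name} finished {job}")

machines = [threading.Thread(target=worker, args=(f"server{i:02}",)) for i in range(4)]
for m in machines:
    m.start()
for m in machines:
    m.join()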

A manager's solution is to give the ASIC team 12 servers: asic01, asic02, … asic12. QA gets 20 machines, qa01 … qa20, and so on. If you want to work on a machine, you ssh to it and you work. If you're an ASIC engineer and you want to use a QA machine, you can't log in. If users fight over their team's machines, they can go to their manager. If their manager decides the team needs more machines, he goes to upper management. Good enough.

The "good enough" is not 70% of "the best" – it's not even in the same direction. In fact, it's more like -20%: once the "good enough" solution is deployed, the road towards "the best" gets harder. You restrict access to machines, and you get people used to the ssh session interface, which "the best" solution will not provide.

Which solution is actually better? Tough question.

  • The manager's solution requires no programming or installation, and trivial administration.
  • The manager's solution requires no changes of habits (ssh is the standard).
  • The engineer's solution yields ~100% utilization, the manager's 50%, 30%, or 10%, depending.
  • The manager's solution will cause less headache to the managers, on average. Once in a while, you buy a team some more machines, and they shut up. The engineer's solution requires setting priorities at times of overload, and people will constantly argue with managers about priorities.
  • The engineer's solution doesn't run things on machines that are already busy anyway. Users armed with ssh will tend to do this, possibly with horrible slowdowns due to swapping, etc.
  • The engineer's solution can provide the lowest latencies.
  • The engineer's solution makes it trivial to utilize new machines – no need to set up sessions, just submit jobs as previously. This is good – and bad: people will demand new machines instead of reducing the load.

So there are many conflicting considerations. Their importance varies between cases, and is very hard to measure.

"The best" solution is not necessarily the best – rather, it's what the mindset of looking for the best yields. Likewise, the "good enough" solution is not necessarily good enough – it's just what you come up with when you look for a good enough approach.

Obviously, portraying "engineers" and "managers" this way is a gross oversimplification, and most real people can look at things from both angles. I like this oversimplification for two reasons:

  1. It does help to understand and predict many common arguments and reactions.
  2. A person's perspective frequently does coincide with the title: engineers aim at the best, managers look for the good enough. The title alone is thus a good basis for prediction.

How does title affect judgement? There are arguments to the effect of "aiming at the good enough is wiser, and managers are in a better position to achieve wisdom", or vice versa. However, I don't believe either viewpoint is more correct than the other. Rather, they are biases. People in different positions acquire different biases, because they have different incentives and constraints.

The difference in incentives is that engineers are rewarded for success, while managers are rewarded for non-failure.

An exceptional engineer is unique and valuable, like a great chef – being an OK engineer is more like cooking for McDonald's. An engineer needs unusual success to be noticed. One reason such success is achievable is because he works within his area of expertise, on things that he understands. There's rarely a formal quality metric, but his deep understanding creates in him a sense of beauty that serves as his metric.

A manager is fine as long as the project doesn't fail. Most projects fail – if yours didn't, it's quite an achievement. One reason they fail is that there are a lot of ways to fail. Forget just one item in a lengthy checklist, and it won't matter how well the other 99 items are handled. People may not buy a car because there's no convenient place for a resting arm. A manager looks after a whole lot of items, usually without being able to understand most of them in any depth. He's inclined to look for simple pass/fail criteria.

If your success function is a continuous metric, you'll aim at the best. If it's a long list of Boolean values – V or X – you'll want "good enough" (V) at every entry. "The best" is just a costlier way to get a V, at a higher risk to end up with an X. The manager's outlook leads to binarization.

Another difference is that engineers and managers control different things – "One man's constant is another man's variable". In our example, the engineer doesn't decide how many machines to purchase or how to allocate them. If the number of machines is constant, what remains is to make the most of them. The manager, on the other hand, doesn't directly control utilization, and frequently doesn't know how to improve it. If utilization is constant, what remains is to properly ration it.

More generally, engineers tend to optimize within constraints set by management, in part because they can't change those constraints. As for managers, they tend to settle for "good enough", in part because they can't optimize.

Which biases lead to more sensible solutions depends on the situation. However, there's a subtle reason to like the engineer's devotion to excellence, despite the manager's pressure for mediocrity. It's precisely when the manager is right – when excellence will not contribute to sales – that the engineer contributes the most to users.

If an engineer's job is done so poorly as to make the product non-marketable, the product will fail and won't be used. When the quality along all dimensions passes the adoption threshold, every improvement along any dimension is effectively a gift to the users – who'd be using the product anyway. This is false where improvements contribute significantly to popularity, but true where they don't.

The improvements are inconsequential for business, but consequential economically. Economically, anything that helps people is good. For business, anything that creates profit is good. Often these definitions of "good" coincide, but sometimes they don't. In these cases, people who aim at excellence for excellence's sake help bridge the gap.

The manager's checklist approach – the business angle – ensures that the product is marketable, which is great because otherwise it won't get used. But the engineer's urge to optimize is closer to the really important goal: making the best of what we've got; this is the economics angle. So while it can lead to problems just like any other bias, it's likeable that way.

If a tree falls in a forest, it kills Schrödinger's cat

Schrödinger used to have this quantum cat which was alive and dead at the same time as long as nobody opened the box, and it was the very act of looking at the cat that made it either alive or dead. Now, I'm not sure about this quantum stuff, but if you ask me you'd always find a dead cat upon opening the box, killed by the act of not looking. In fact, if you open any random box nobody was looking into, chances are you'll find a dead cat there. Let me give an example.

I recently chatted with a former employee of a late company I'll call Excellence (the real name was even worse). Excellence was a company with offices right across the street from ours that kept shrinking until the recent financial crisis, when it had to simultaneously fire almost all of its remaining employees – carefully selected as its best during the previous years, when other employees were continuously fired at a lower rate. This gave us a whole lot of great hires, including MV, the co-worker in this story (though he was among those who guessed where things were going and crossed the street a bit earlier).

Rumors tell that, to the extent possible, Excellence used to live up to the expectations created by its name. In particular, without being encouraged or forced by customers to comply with any Software Development Methodology such as the mighty and dreadful CMM, they had (as CMM puts it) not only Established, but rigorously Maintained an elaborate design, documentation and review process which preceded every coding effort. Other than praise, MV had little to say about the process, except perhaps that occasionally someone would invent something awfully complicated that made code incomprehensible, having gone just fine through the review process because it sounded too smart for anyone to object.

Now, in our latest conversation about how things were at Excellence, MV told how he once had to debug a problem in a core module of theirs, to which no changes had been made in years. There, he stumbled upon a loop searching for a value. He noticed that when the value was found, the loop wouldn't terminate – a continue instead of a break kind of thing. Since the right value tended to be found pretty early in the loop, and because it was at such a strategic place, test cases everyone was running took minutes instead of seconds to finish. Here's a dead cat in a gold-plated box for ya, and one buried quite deeply.
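A made-up Python illustration of that kind of bug – the answer comes out right either way, which is exactly how it can hide for years; only the cost differs:

# Invented example of the bug shape described above: the result is correct,
# but the loop scans everything even when the value is found early on.
records = list(range(1_000_000))

def find_buggy(target):
    found = None
    for r in records:
        if r == target:
            found = r
            continue              # should have been `break`
    return found

def find_fixed(target):
    for r in records:
        if r == target:
            return r              # stop as soon as the value is found
    return None

assert find_buggy(42) == find_fixed(42) == 42   # same answer, wildly different cost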

My own professional evolution shaped my mind in such a way that it didn't surprise me in the slightest that this slipped past the reviewer(s). What surprised me was how this slipped past the jittery bodybuilder. You see, we have this Integration Manager, one of his hobbies being bodybuilding (a detail unrelated, though not entirely, to his success in his work), and one thing he does after integrating changes is he looks at the frame rate. When the frame rate seems low, he pops up a window with the execution profile, where the program is split into about 1000 parts. If your part looks heavier than usual, or if it's something new that looks heavy compared to the rest, it'll set him off.

So I asked MV how come the cat, long before it got dead and buried, didn't set off the jittery bodybuilder. He said they didn't have one for it to set off. They were translating between the formats of different programs. Not that performance didn't matter – they worked on quite large data sets. But to the end user, automatic translation taking hours had about the same value as automatic translation taking seconds – the alternative was manual translation taking weeks or months. So they took large-scale performance implications of their decisions into account during design reviews. Then once the code was done and tested, it was done right, so if it took N cycles to run, it was because it took N cycles to do whatever it did right.

And really, what is the chance that the code does everything right according to a detailed spec it is tested against, but there's a silly bug causing it to do it awfully slowly? If you ask me – the chance is very high, and more generally:

  • Though not looking at performance followed from a reasonable assessment of the situation,
  • Performance was bad, and bad enough to become an issue (though an unacknowledged one), when it wasn't looked at,
  • The system in general was certainly "looked at", apparently by more eyes and from more angles than "an average system" – but it didn't help,
  • So either you have a jittery bodybuilder specifically and continuously eyeballing something, or that something sucks.

Of course you can save effort using jittery automated test programs. For example, we've been running a regression testing system for about a year. I recently decided to look at what's running through it, beyond the stuff it reports as errors that ought to be fixed (in this system we try to avoid false positives to the point of tolerating some of the false negatives, so it doesn't complain loudly about every possible problem). I found that:

  • It frequently ran out of disk space. It was OK for it to run out of disk space at times, but it did it way too often now. That's because its way of finding out the free space on the various partitions was obsoleted by the move of the relevant directories to network-attached storage.
  • At some of the machines, it failed to get licenses for one of the compilers it needed – perhaps because the env vars were set to good values for most users but not all, perhaps because of a compiler upgrade it didn't take into account. [It was OK for it to occasionally fail to get a license (those are scarce) – then it should have retried, and in the worst case reported a license error. However, the compiler error messages it got were new to it, so it thought something just didn't compile. It then ignored the problem on otherwise good grounds.]
  • Its way of figuring out file names from module names failed for one module which was partially renamed recently (don't ask). Through an elaborate path, this resulted in tolerating false negatives it really shouldn't have.

And I'm not through with this thing yet, which to me exemplifies the sad truth that while you can have a cat looking at other cats to prevent them from dying, a human still has to look at that supervisor cat, or else it dies, leading to the eventual demise of the others.

If you don't look at a program, it rots. If you open a box, there's a dead cat in it. And if a tree falls in a forest and no one is around to hear it, it sucks.

Applied mathematics in business consulting

As a part of my continuous moral degradation and the resulting increasing alignment with the forces of Evil, I'm sharing an apartment with a gal who used to work in HR assessment. She recently got me acquainted with a friend of hers, BC, who works as a business consultant (names have been changed to protect the guilty).

BC's primary educational background is in applied mathematics. Having put the math they teach in CS departments to relatively few uses as a working programmer, I asked her about the uses of applied mathematics in business consulting. BC cited the following two examples.

The first example involves compensation and its dependence on key performance indicators, affectionately known as KPI and estimated by HR assessors. One way of looking at this dependence is to consider how it affects compensation over time as an employee's competence increases.

A psychological discussion is then possible on the relative merits of the different graphs plotting the compensation functions f(KPI). If f is linear (has a constant derivative), we make people struggle equally hard at every step. If f's derivative increases over time (for instance, when f is exponential), we make elevation hard at first and then increasingly easy. If f's derivative decreases over time (for example, if f is logarithmic), we make the last mile the hardest. And so on.

Through a psychological discussion of this sort, someone in the consulting company decided that what was really needed in some case or other was an S-shaped curve. The problem was that you couldn't just draw an S-shaped curve – the plotting had to be done in Excel according to a formula; an S-shaped curve which just blithely goes through arbitrary points doesn't cut it when you deliver a Compensation Model. But how do you make a formula go up slowly, then fast, then slowly again? Exponents don't work. Logarithms don't work. A sine does look like an S, but it's a wrong kind of S, somehow. What to do?

Enter BC with 6 years of math studies under her belt. A compact yet impressive formula is spelled out, and – presto! – Excel renders an S-shaped curve. (I guess she used the sigmoid function but I didn't check.) The formula brought delight to management and fame to BC, and compensation payments issued according to its verdict keep adding up to scary numbers (BC's agency works with some really big companies).
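She didn't show me the formula, so this is only a guess at its shape: the standard logistic sigmoid, which does rise slowly, then fast, then slowly again when tabulated in a spreadsheet. The steepness k, midpoint x0 and the pay range are invented parameters.

import math

def s_curve(kpi, k=10.0, x0=0.5, low=1000.0, high=5000.0):
    # Map a KPI in [0, 1] to compensation along an S-shaped (logistic) curve.
    sigmoid = 1.0 / (1.0 + math.exp(-k * (kpi - x0)))
    return low + (high - low) * sigmoid

# Rises slowly near 0, fast around the midpoint, slowly again near 1.
for kpi in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"KPI={kpi:.2f} -> {s_curve(kpi):8.2f}")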

The second example involves the compensation of managers. Naturally, a good manager near the bottom is worth less to the firm than a bad manager near the top, and therefore the compensation function should now depend on the manager's level in the hierarchy as well as his KPI (or better). Equally naturally, the numbers coming out of the compensation spreadsheet will under no circumstances arise through an externally conducted study of their psychological implications or any similarly unpredictable device. The numbers will result from nothing but the deep understanding of the organization possessed by the top management.

The development process of the managerial compensation function is thus complementary to that of the employee compensation function. Instead of producing numbers from a beautiful spreadsheet, what is needed here is to produce a beautiful spreadsheet from the numbers specified by the top management. The spreadsheet then promptly generates back these exact numbers from the input parameters.

The purpose of the spreadsheet is to relieve the top managers of the need to justify the numbers to their underlings. In order to guarantee that they are relieved of this need, the formula should not contain terms such as 1/(level^2), which could raise questions such as why not use 1/level, why not use 1/log(level), and other questions along these lines. Rather, the formula should contain terms which could raise no questions at all simply due to their size and shape.

BC faced this problem at an early stage of her career, and despite the natural stress, came up with an interesting Compensation Model, its key term being e raised to the power of something unspeakably grand, combining the trademark Gaussian look and feel with an obvious ability to deter the skeptics from asking questions. The only problem with that term was the very source of its utility, namely, the fact that it evaluated to 0 for all values of KPI and hierarchy level.

The deadline being close, BC told the manager of the consulting project in question about the status of her work and expressed her doubts regarding the delivery of the Compensation Model. The manager told her that she just doesn't get it, does she – it's great, the right numbers come out, and that's all there is to it, and we should send it right away. And so they did, to everyone's complete satisfaction.

Her command of applied mathematics aside, BC is generally quite powerful.

For instance, she once got invited to consult some government agency about a project of theirs – while on vacation, and without it being explained to her that she was about to attend a formal meeting with the whole team. In order to maintain the reputation of the guy who somewhat clumsily brought her in, she had to improvise.

The project, worthy of a government agency, was related to some sort of war on corruption, the unusual thing being that they wanted to fight the corruption of other governments. Their weapon of choice was the training of representatives of another government, financed by the other government, in their supposedly superior methods of governance. While the general concept was impressive on many levels, the details were unclear.

BC had to speak, and she spoke based on a principle appearing in a book by some McKinsey alumni (she didn't tell me its name, nor did she particularly recommend it): whatever you tell people should contain 3 main points. Possibly 4. But preferably 3. More is overwhelming and less is boring. So she said: "At its core, your project is about teaching people. It is therefore essential to clearly understand three things:

  • Whom you're teaching,
  • What you're teaching them,
  • And how you're teaching it."

And they started writing it down.

I asked BC whether there was some way to unleash her on the company employing me so that she grinds a few bullet points into them (a handsome Compensation Model being as good a place to start as any). She said something to the effect of "it only works on the weak-minded"; it was apparent, she said, that the government agency in question had little previous exposure to consulting.

BC says she (still) believes that business consulting is meaningful and valuable, which sounds paradoxical at this point. But, looked at from another angle, it really isn't. Don't view her stories as ones undermining the credibility of business consulting, but rather as ones building her own credibility as a person aware of the actual meaning of things and willing to sincerely share that understanding (how many people would instead say that they Developed Cutting-Edge Compensation Models?). If she says there's meaning to it, perhaps there is.

Lack of wealth through lack of empathy

A few days ago a coworker stepped into the room of our Integration Manager, with his iPhone in his hand and his face glowing with happiness, telling us he had this great new game on his phone. He asked for permission to take a picture of the Integration Manager, and did so to the sound of dramatic, though somewhat repetitive, music. He then marked the areas around the eyes and the mouth of his subject.

Then the game started. The guy pressed some buttons, delivering virtual blows to the photographed subject, who couldn't really fight back according to the rules of the "game". The face on the photograph got covered with blood and wounds (especially the eyes and the mouth), and the speaker yelled out something designed to convey pain. And the guy stood there and kept punching and laughed his ass off.

Then he talked about how the iPhone was this great thing and how iPhone app developers got rich. And I told him in a grim voice to go ahead and get rich off iPhone apps, and he sensed disrespect – to him, or to the iPhone, or both. But mainly it wasn't disrespect, it was desperation. I told him to go ahead and do these apps because he probably can, and I certainly can't, so I have no choice but to leave all this wealth for him to collect.

Take this punching app. Where do I fit in?

For instance, I know enough computer vision to eliminate the need to manually select the eyes and the mouth. You find the eyes using a circular Hough transform, which gives you a good idea of where to look for the mouth, and from there it can't be too hard. How much value would this automation add? The happy user I observed didn't seem bothered by the need to select; worse, I suspect having to do it made him happier – this way he actually worked to deliver his punches.
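
For what it's worth, a rough sketch of the eye-finding step I have in mind, using OpenCV's Hough circle transform (assuming OpenCV 3+; the parameter values are guesses and would need tuning on real face photos):

import cv2

def find_eye_candidates(image_path):
    # look for small circles (eye candidates) in the upper half of a face photo
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)       # smooth noise before the Hough transform
    top = gray[:gray.shape[0] // 2]      # eyes live in the upper half of the face
    circles = cv2.HoughCircles(top, cv2.HOUGH_GRADIENT, 1, 40,
                               param1=100, param2=30, minRadius=5, maxRadius=40)
    return [] if circles is None else circles[0].tolist()   # (x, y, radius) triples

# the mouth would then be searched for below and between the two strongest candidates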

And then what I couldn't do, and I couldn't do this for the life of me, is to invent this app in the first place. It's just something that I don't think I'll ever understand, something worse than, I dunno, partial differential equations which you can keep digging into, something more like the ability to distinguish between colors, which is either there or not.

Could someone PLEASE explain just how can such a profoundly idiotic activity yield any positive emotions in a human soul?

Perhaps it's because our Integration Manager is a bodybuilder, with bulging biceps and protein milkshakes and stuff, so there's no other way to beat him up for that guy? Well, he could probably beat me up, and he still enjoyed going through the process with me. So this just shows how childish my attempt to dissect the psychology of the phenomenon is – I'm not even close. How could I ever think of something I can't even begin understanding after I've already seen it?

The serious, or should I say sad, side of this is that to develop something valuable to users, you need empathy towards these users, and the way empathy works is through identifying with someone, and so the best chance by far to develop something valuable is to develop something for yourself. But someone like me, who bought his first PC in 2007 and uses a cell phone made in 2004, with no camera, no Internet connection, no nothing, simply can't develop something for himself because he clearly just doesn't need software.

The only chance for a programmer who uses little software except for development tools is to develop development tools. Take Spolsky, for instance – here is a programmer who is, by the programmers' standard, extremely user-aware and user-centric, as evident in his great UI book, among other things, and yet he eventually converged to developing software for programmers, having much less success with end-user products.

The trouble with development tools is that the market is, of course, saturated: so many developers with otherwise entrepreneurial mindsets are limited by their lack of empathy to this narrow segment of software. This is why so many programs for programmers have a price of zero. It's not that copying software has a cost near zero – developing software has a cost far from zero, which is why there aren't that many free face-punching apps – it's that with software for developers, there's unusually fierce, empathy-driven competition.

So every time I see an inexplicably moronic hit app, I sink into sad reflections about my destiny of forever developing for developers, being unable to produce what I can't consume and watching those with deeper understanding of the human nature reaping all the benefits of modern distribution channels.

P.S. To the fellow programmer who, as it turns out, uses FarmVille – you broke my heart.

API users & API wrappers

Suppose you have a sparse RAM API, something along the lines of:

  • add_range(base, size)
  • write_ram(base, bytes)
  • read_ram(base, size)
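
For concreteness, here's a minimal sketch of what could sit behind these three calls – a plain list of ranges scanned on every access, which is also the "original implementation" discussed further below. Everything apart from the three API names is made up:

_ranges = []   # list of (base, bytearray) pairs

def add_range(base, size):
    _ranges.append((base, bytearray(size)))

def _find(base, size):
    for start, data in _ranges:   # slow linear scan - see the page table idea below
        if start <= base and base + size <= start + len(data):
            return start, data
    raise ValueError("unmapped address: 0x%x" % base)

def write_ram(base, data_bytes):
    start, data = _find(base, len(data_bytes))
    data[base - start : base - start + len(data_bytes)] = data_bytes

def read_ram(base, size):
    start, data = _find(base, size)
    return bytes(data[base - start : base - start + size])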

People use this API for things like running a simulated CPU:

  1. define the accessible memory with add_range()
  2. pass the initial state to the simulator with write_ram()
  3. run the simulation, get the final state with read_ram()

Suppose this API becomes a runaway success, with a whopping 10 programmers using it (very little irony here, >95% of the APIs in this world are used exclusively by their designer). Then chances are that 9 of the 10 programmers are API users, and 1 of them is an API wrapper. Here's what they do.

API users

The first thing the first API user does is call you. "How do I use this sparse thing of yours?" You point him to the short tutorial with the sample code. He says "Uhmm. Errm…", which is userish for "Come on, I know you know that I'm lazy, and you know I know that docs lie. Come over here and type the code for me." And you insist that it's actually properly documented, but you will still come over, just because it's him, and you personally copy the sample code into a source file of his:

add_range(0x100000, 6) # input range
add_range(0x200000, 6) # output range
write_ram(0x100000, "abcdef")
# run a program converting the input to uppercase
print read_ram(0x200000, 6) # should print "ABCDEF"

It runs. You use the opportunity to point out that your documentation is better than what he's perhaps come to expect (though you totally understand his frustration with the state of documentation in this department, this company and this planet). Anyway, if he has any sort of problem or inconvenience with this thing, he can call you any time.

The next 8 API users copy your sample code themselves, some of them without you being aware that they use or even need this API. Congratulations! Your high personal quality standards and your user-centric approach have won you a near-monopoly position in the rapidly expanding local sparse RAM API market.

Then some time later you stumble upon the following code:

add_range(0x100000,256)
add_range(0x200000,1024)
add_range(0x300000,1024)
...
add_range(0xb00000,128)
...
add_range(0x2c00000,1024)
...

Waitaminnit.

You knew the API was a bit too low-level for the quite common case where you need to allocate a whole lot of objects, doesn't matter where. In that case, something like base=allocate_range(size) would be better than add_range(base,size) – that way users don't have to invent addresses they don't care about. But it wasn't immediately obvious how this should work (the Nth call to allocate_range() appends a range after the last allocated address, but where should the first call put things? What about mixing add_range() and allocate_range()? etc.)

So you figured you'd have add_range(), and then whoever needed to allocate lots of objects, doesn't matter where, could just write a 5-line allocate_range() function good enough for him, though not good enough for a public API.
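
Something along the lines of this hypothetical helper, say – the starting address and the alignment are arbitrary:

_next_base = 0x100000   # arbitrary starting address for "don't care where" allocations

def allocate_range(size):
    global _next_base
    base = _next_base
    add_range(base, size)
    _next_base += (size + 0xfff) & ~0xfff   # bump to the next 4KB boundary
    return base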

But none of them did. Why? Isn't it trivial to write such a function? Isn't it ugly to hard-code arbitrary addresses? Doesn't it feel silly to invent arbitrary addresses? Isn't it actually hard to invent constant addresses when you put variable-sized data there, having to think about possible overlaps between ranges? Perhaps they don't understand what a sparse RAM is? Very unlikely, that, considering their education and experience.

Somehow, something makes it very easy for them to copy sample code, but very hard to stray from that sample code in any syntactically substantial way. To them, it isn't a sparse RAM you add ranges to. Rather, they think of it as a bunch of add_range() calls with hexadecimal parameters.

And call add_range() with hex params they promptly will, just as it's done in the sample. And they'll complain about how this API is a bit awkward, with all these hex values and what-not.

API wrappers

If there's someone who can see right through syntax deep into semantics, it's the tenth user of your API, or more accurately, its first wrapper. The wrapper never actually uses an API directly in his "application code" as implied by the abbreviation, standing for "Application Programming Interface". Rather, he wraps it with another (massive) layer of code, and has his application code use that layer.

The wrapper first comes to talk to you, either being forced to use your API because everybody else already does, or because he doesn't like to touch something as low-level as "RAM" so if there's already some API above it he prefers to go through that.

In your conversation, or more accurately, his monologue, he points out some admittedly interesting, though hardly pressing issues:

  • It's important to be able to trick a program using the sparse RAM API into allocating its data in specific address ranges, so that the resulting memory map is usable on certain hardware configurations and not just in simulations.
  • In particular, it is important to be able to extract the memory map from the section headers of executables in the ELF and COFF format.
  • Since add_range() calls are costly, and memory map formats such as the S-Record effectively specify a lot of small, adjacent ranges, there is a need for a layer joining many such ranges (see the sketch after this list).
  • An extensible API for the parsers of the various memory map formats is needed.
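
The range-joining layer he has in mind might look something like this sketch – merging adjacent or overlapping (base, size) pairs before issuing add_range() calls; the function name is made up:

def add_ranges_coalesced(ranges):
    # merge adjacent/overlapping (base, size) pairs, then issue one add_range() per merged run
    merged = []
    for base, size in sorted(ranges):
        if merged and base <= merged[-1][0] + merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], base + size - merged[-1][0])
        else:
            merged.append([base, size])
    for base, size in merged:
        add_range(base, size)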

When you manage to terminate the monologuish conversation, he walks off to implement his sparse RAM API on top of yours. He calls it SParser (layer lovers, having to invent many names, frequently deteriorate into amateur copywriters).

When he's done (which is never; let's say "when he has something out there"), nobody uses SParser but him, though he markets it heavily. Users won't rely on the author who cares about The Right Thing but not about their problems. Other wrappers never use his extra layers because they write their own extra layers.

However, even with one person using it, SParser is your biggest headache in the sparse RAM department.

For example, your original implementation used a list of ranges you (slowly) scanned through to find the range containing a given address. Now you want to replace this with a page table, so that, given an address, you simply index into a page array with its high bits and either find a page with the data or report a bad address error.

But this precludes "shadowing", where you have overlapping segments, one hiding the other's data. You thought of that as a bug in the user code your original implementation didn't detect. The wrapper thought it was a feature, and SParser uses it all over to have data used at some point and then "hidden" later in the program.
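
A minimal sketch of the page table variant (replacing the list-of-ranges scan), assuming fixed 4KB pages; read_byte is just an illustrative helper. Since the lookup keeps exactly one page per address, overlapping ("shadowed") ranges simply can't be represented:

PAGE_BITS = 12      # assume fixed 4KB pages
_pages = {}         # page number -> bytearray(4096)

def add_range(base, size):
    for page in range(base >> PAGE_BITS, ((base + size - 1) >> PAGE_BITS) + 1):
        _pages.setdefault(page, bytearray(1 << PAGE_BITS))

def read_byte(addr):
    page = _pages.get(addr >> PAGE_BITS)
    if page is None:
        raise ValueError("bad address: 0x%x" % addr)   # no range contains this address
    return page[addr & ((1 << PAGE_BITS) - 1)]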

So you can't deploy your new implementation, speeding up the code of innocent users, without breaking the code of this wrapper.

What to do

Add an allocate_range() API ASAP, update the tutorial, walk over to your users to help replace their hex constants with allocate_range() calls. Deploy the implementation with the page table, and send the complaining wrapper to complain upwards along the chain of command.

Why

Your users will switch to allocate_range() and be happy, more so when they get a speed-up from the switch to page tables. The wrapper, constituting the unhappy 10% of the stakeholders, will have no choice but to fix his code.

Ivan drank half a bottle of vodka and woke up with a headache. Boris drank a full bottle of vodka and woke up with a headache. Why drink less?

Users are many, they follow a predictable path (copy sample code) and are easily satisfied (just make it convenient for them to follow that path). Wrappers are few, they never fail to surprise (you wouldn't guess what their layers do, and especially why), and always fail to be satisfied (they never use APIs and always wrap them). Why worry about the few?

The only reason this point is worth discussing at all is that users offend programmers while wrappers sweet-talk them, thus obscuring the obvious. It is natural to feel outrage when you give someone an add_range() function and a silly sample with hex in it, and not only do they mindlessly multiply hex numbers in their code, but they blame you for the inconvenience of "your API with all the hex in it". It is equally natural to be flattered when someone spends time to discuss your work with you, at a level of true understanding ("sparse RAM") rather than superficial syntactic pattern matching ("add_range(hex)").

He who sees through this optical illusion will focus on the satisfaction of the happy many who couldn't care less, securing the option to ignore the miserable few who think too much.

Digital asses in the computing industry

Ever noticed how academic asses are analog and industrial asses are digital? It's legitimate to not know whether P equals NP, or to not know what x is if x*2=y but we don't know y, for that matter. But it isn't legitimate to not know how many cycles, megabytes or – the king of them all – man-months it will take, so numbers have to be pulled out of one's ass.

The interesting thing is that the ass adapts, that the numbers pulled out of this unconventional digital device aren't pure noise. Is it because digital asses know to synchronize? Your off-by-2-months estimation is fine as long as other estimations are off by 5. But it's not just that, there must be something else, a mystery waiting to be discovered. We need a theory of computational proctology.

Ever noticed how painful the act of anal estimation is for the untrained, um, mind, but then eventually people actually get addicted to it? Much like managers who learn that problems can be made to go away by means such as saying a firm "No", without the much harder process of understanding the problem, not to mention solving it? Anal prophecy is to the technical "expert" the same raw enjoyment that the triumph of power over knowledge is to the manager. "Your powers are nothing compared to mine!"


There once was a company called ArsDigita (I warmly recommend the founder's blog and have his Tenth Rule tattooed all over my psyche), a name I tend to misread as "ArseDigital" – a tribute to an important method of numerical analysis and estimation in the computing industry.