A manager's pitfall: striving to "add value"

There's this guy I work with who has a head the size of a small planet. In that head, dozens of design decisions big and small are made every day. Hundreds of options clash in his mind: which gives us the best trade-off between 3 different desirable things? Or 5 things, or 12, as the case may be?

And the thing about this guy is, he'll always make the decision, because you must, because deadlines. So not the kind of guy who just gets stuck in the multitude of possibilities, nope, not him. BUT he's normally unhappy with the decision, because you see, there wasn't time to REALLY evaluate it PROPERLY.

So we've been working with this guy, and he'd often ask me to help decide something at his end.

At first I'd ask about the details, and I'd propose solutions. And he didn't seem very happy with that, and eventually he got seriously pissed off. That was when I asked for a really big new feature, and he said it wasn't doable, "forget it."

So I said, "let's see though, what makes this so hard? Because I'm willing to throw a lot of it out and do just the most bare-bones version. Maybe then it'd be doable." So I was helping him, I was willing to ask for less, right? So I sure didn't expect the outcome, which was, he rather forcefully said he couldn't work like that any longer, with no stable specs and no real schedule and it's just him doing this after all AND…

At this point I sorta gathered from the form and substance of his speech that I should back off, RIGHT NOW. I said, "look, if you add this, all I can say is, the whole thing will be much better. But if it can't be done, fine, we can live without it. It's entirely your call and I have nothing to add."

And then he did it. Not some bare-bones, half-assed version, mind you, he did a pretty impressive full-blown thing. Then he redid it to make it better still.

So clever me thinks to myself, aha! I see, so what annoyed him wasn't my asking for this huge feature in itself. What annoyed him was my demanding that he explain why it was hard, so I could break it up into parts, and tweak it, and basically solve it myself. He didn't really mind solving it, as long as he was the one solving it, provided that

  • It was a request and not a "requirement" – because someone like me who doesn't really understand what goes into it has no right to just require it. You're not entitled to things you can't even comprehend, asshole!
  • …and I wouldn't try to "help" him by attempting to remedy my ignorance at his expense, questioning him on the details and then hastily "making it easier" for him, without thinking deeply enough whether making it easier was even NEEDED, and so tricking him into making some pale no-good version of it! And then HE, because of ME, because of my impatient superficial "help", will have done a bad job! No, he doesn't want it easy, he wants it done right, so take a hike and give him time to decide what the right trade-off is, where to work harder and get more done and where to do less.

OK, so that's how it's gonna work from now on, I thought. From then on, if he wanted me to help him decide something, I'd just ask him what options he had in mind, regardless of what I could come up with myself. And I'd say this one sounds good.

This sort of worked, but he still didn't seem quite happy. And at one point he got pissed off and demanded explanations why I thought his option A was better than his option B. And I gave my explanations, and he thought they were shit. And I said, you know, A and B are both alright. Pick whichever you like better. "What?! So why did you just say that A was better?" "I dunno, I was wrong, you convinced me, it's a tough call."

Now I finally got it! So the way it works is, he thinks something through, for hours and days. If it's still a close call, he comes to me, whom he genuinely respects, and who's got the nominally higher seniority. What he comes to me for is superior wisdom. What he doesn't realize, out of excessive humility, is that there's no way I'll be better at deciding it than him, because he's better at it than me, and he's been thinking about it for so much longer.

And he only comes to me with what to him are the hardest things, guaranteeing that I won't be able to help, not really. So trying will just piss him off because I'll necessarily be suggesting some pretty dumb and superficial decision procedure, which is like helping Michelangelo with a sculpture by lending him a sledgehammer to just remove all that big chunk of stone over there.

So from then on I'd listen to his doubts, and agree that it's hard because of this thing we can't measure and that other thing we can't know, and say optimistically that either way I think it'll come out alright but it's his call, really. And he appeared very happy with my careful supervision of his work.

Bottom line for me was, I did very little hard thinking and a pretty cool project got done, I think, and the guy was happy, at least as much as he can be in this imperfect world. What's not to like?

The thing many of us managers don't like in such cases is, we seem to have added no value. Where's my chunk of value?

"He who does not work doesn't get to eat", they taught me in the USSR. "To capture value, add value" would be the translation of this slogan into MBA-speak. "By the sweat of your brow you will eat your food", says the Holy Bible. None of this prepares you for a managerial role, where not doing shit is usually better than trying too hard.

A manager on a mission to add value usually thinks that it's his job to have all the good and important ideas. Corollary: someone else having many of the good and important ideas means you, the manager, are bad at your job. Will you accept this and suffer silently, or will you convince yourself that the report's ideas aren't that good? You don't want to find out. I say, don't put yourself in this situation in the first place, don't try hard to have the good ideas, look at your ability to not do shit in return for a nice compensation as both your new virtue and your immense luck, and relax.

If you're really addicted to making tangible contributions, I suggest doing what I've been doing in managerial roles – do some of the work yourself. Now, this is, without doubt, terribly irresponsible, because it makes me unavailable at unpredictable moments. When shit hits the fan and I must attend to it in my contributor's role, it prevents me from attending to the never-ending fan-hitting shit that the people I manage deal with.

There's also an advantage, and that is that I know how much things really suck in the trenches. And that's good, and anyone overseeing fewer than 50-100 people should stay current that way, I think. But don't "stay current" by working on urgent mission-critical shit (as I often do, because I suck). Pick something unimportant instead.

Anyway, working myself is immensely better than trying to have the good ideas myself, I believe. This can result in not being there when they need me, but it never results in being there when they don't, which is worse.

If it's not the deep thinking, then what do managers do, except for sitting on their asses? I started putting it down, and it didn't come out very well, and the reason ought to be that, well, I don't really get it at this point. I'm a middling manager, whose main skill is attracting strong people and then taking credit for their work. And this one managerial skill compensates for the underdevelopment of the rest so well that they, unfortunately, remain underdeveloped.

So until my understanding of the managerial role improves, the one thing I will say here is, many people actually don't do shit most of the time and it's fine – say, lifeguards. Why do you need them? Just in case. So that's one way I think about myself - I'm like a lifeguard, except for the physique. Maybe I should use the time freed up by not having to do deep thinking to work on my pectoral muscles, so that one day I could star in "Programmer Watch", a "Baywatch" rip-off about R&D management. But I keep my hopes low on that one.

I say, quit "adding value", fellow managers. Just sit back and take credit for others' work. They'll thank you for it.

P.S. angry hamster – this one's for you.

The C++ FQA is on GitHub

I have become burned out in my role as guardian of the vitriol, chief hate-monger… and most exalted screaming maniac.

– Alan Bowden, then the moderator of the UNIX-HATERS mailing list

The C++ FQA's "source code" (that is, the ad-hoc markup together with the ugly code converting it to HTML) is now on GitHub.

I decided to write the FQA around 2006. I felt that I should hurry, because I'd soon stop caring enough to spend time on it, or even to keep enough of it all in my head to keep the writing technically correct.

And indeed, over the years I did mostly stop caring. Especially since I managed to greatly reduce my exposure to C++. The downside, if you can call it that, is that, in all likelihood, I'll forever know much less about C++11 than I knew about C++98. I mean, I can use auto and those crazy lambdas and stuff, but I don't feel that I understand all this horror well enough to write about it.

So I invite better informed people to do it. For a while I was looking for a single person to take it over. The trouble is, most people disliking C++ choose to spend less time on it, just like me. And since writing about C++'s flaws is one way of spending time on it, people who want to do it, almost by definition, don't really want to do it. Bummer!

However, doing a little bit of editing here and there might be fun, it wouldn't take too much of your time, and you'd be creating a valuable and scarce public good in an environment where naive people are flooded with bullshit claims of C++11 being "as fast as C and as readable as Python" or some such.

I don't think my original tone has to be preserved. My model at the time was the UNIX-HATERS style, and in hindsight, I think maybe it'd actually turn out more engaging and colorful if I didn't write everything in the same "I hate it with the fire of a thousand suns" tone.

Also, and I think this too was needlessly aped from UNIX-HATERS – I keep insisting on this good-for-nothing language being literally good for nothing, and while I don't outright lie to make that point, I'm "editorializing", a.k.a. being a weasel, and why do that? I mean, it's a perfectly sensible choice for many programmers to learn C++, and it's equally sensible to use it for new projects for a variety of reasons. Do I want to see the day where say Rust scrubs C++ out of the niches it currently occupies most comfortably? Sure. But being a weasel won't help us get there, and for whoever C++ is the best choice right now, persuading them that it isn't on general grounds can be a disservice. So the "good for nothing" attitude might be best scrapped as well, perhaps.

On the other hand, there was one good thing that I failed to copy from UNIX-HATERS - how it was a collaboration of like-minded haters of this ultimately obscure technical thing, where human warmth got intermixed with the heat of shared hatred. Maybe it works out more like that this time around.

Anyway, my silly markup syntax is documented in the readme file, I hope it's not too ugly, and I'm open to suggestions regarding this thing, we could make it more civilized if it helps. In terms of priorities, I think the first thing to do right now is to update the existing answers to C++11/14, and update the generated HTML to the new FAQ structure at isocpp.org; then we could write new entries.

You can talk to me on GitHub or email Yossi.Kreinin@gmail.com



Fun with UB in C: returning uninitialized floats

The average C/C++ programmer's intuition says that uninitialized variables are fine as long as you don't depend on their values.

A more experienced programmer probably suspects that uninitialized variables are fine as long as you don't access them. That is, computing c=a+b where b is uninitialized is not harmless even if you never use c. That's because the compiler could, say, optimize away the entire block of code surrounding c=a+b under the assumption that c=a+b, where b is proven to be always uninitialized, is always undefined behavior (UB). And if it's UB, the only way for the program to be correct is for this code to be unreachable anyway. And if it's unreachable, why waste instructions translating any of it?

However, the following code looks like it could potentially be OK, doesn't it?

float get(obj* v, bool* ok) {
  float c;
  if(v->valid) {
    *ok = true;
    c = v->a + v->b;
  }
  else {
    *ok = false; //not ok, so don't expect anything from c
  }
  return c;
}
Here you return an uninitialized c, which the caller shouldn't touch because *ok is false. As long as the caller doesn't, all is well, right?

Well, it turns out that even if the caller does nothing at all with the return value – ever, regardless of *ok – the program might bomb. That's because the garbage bits in c could happen to form a signaling NaN, and then say on x86, when the fstp instruction is used to basically just get rid of the return value, you get an exception. In release mode but not in debug mode, some of the time but not all the time. This gives you this warm, fuzzy WTF feeling when you stare at the disassembled code. "Why is there even a float here in the first place?!"

How much uninitialized data is shuffled around by real-world C programs? A lot, I wager – likely closer to 95% than to 5% of programs do this. Otherwise Valgrind would not go to all the trouble to not barf on uninitialized data until the last possible moment (that moment being when a branch is taken based on uninitialized data, or when it's passed to a system call; to not barf then would require some sort of a multiverse simulation approach for which there are not enough computing resources.)

Needless to say, most programs enjoying ("enjoying"?) Valgrind's (or rather memcheck's) conservative approach to error reporting were written neither in assembly which few use, nor in, I dunno, Java, which won't let you do this. They were written in C and C++, and most likely they invoke UB.

(Can you touch uninitialized data in C without triggering UB? I seriously don't know, I'm not a language lawyer. Being able to do this is actually occasionally useful for optimization. Integral types for instance don't have anything like signaling NaNs so at the assembly language level you should be fine. But at the C level the compiler might get needlessly clever if it manages to prove that the data is uninitialized. My own intuition is it can never prove squat about data passed by pointer because of aliasing and so I kinda assume that if I get a buffer pointing to some data and some of it is uninitialized I can do everything to it that I could in assembly. But I'm not sure.)
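To make that intuition concrete, here's a sketch of the pattern I mean – shuffling possibly-uninitialized bytes around through unsigned char, and computing only with the part known to be written. The function name and sizes are made up for illustration, and whether this is strictly UB-free is exactly the open question above:

```c
#include <string.h>

/* Copy a buffer that may contain uninitialized bytes, then sum only
   the `known` initialized ones. unsigned char has no trap
   representations, which is why this pattern is commonly believed to
   yield mere "indeterminate values" rather than instant UB - treat
   that as my assumption, not a language-lawyer verdict. */
unsigned copy_and_sum(const unsigned char *src, size_t n, size_t known) {
  unsigned char copy[64];
  if (n > sizeof copy) n = sizeof copy;
  if (known > n) known = n;
  memcpy(copy, src, n);          /* copies uninitialized bytes too */
  unsigned sum = 0;
  for (size_t i = 0; i < known; ++i)
    sum += copy[i];              /* but only reads initialized ones */
  return sum;
}
```

For instance, given `unsigned char buf[8]` with only the first 4 bytes memset to 0xAB, `copy_and_sum(buf, 8, 4)` shuffles all 8 bytes around but returns 4 * 0xAB.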

What a way to make a living.



Accidentally quadratic: rm -rf??

This is a REALLY bad time. Please please PLEASE tell me I'm not fucked…

I've accidentally created about a million files in a directory. Now ls takes forever, Python's os.listdir is faster but still dog-slow – but! but! there's hope – a C loop program using opendir/readdir is reasonably fast.

Now I want to modify said program so that it removes those files which have a certain substring in their name. (I want the files in that directory and its subdirectories, just not the junk I created due to a typo in my code.)


O(N^2) is easy enough. The unlink function takes a filename, which means that under the hood it reads ALL THE BLOODY FILE NAMES IN THAT DIRECTORY until it finds the right one and THEN it deletes that file. Repeated a million times, that's a trillion operations – a bit less because shit gets divided by 2 in there, but you get the idea.
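The O(N^2) loop I have in mind is roughly this (a sketch, not my actual program – the names are made up):

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>

/* Delete every entry in `dir` whose name contains `substr`.
   The readdir scan itself is O(N) total - fine - but each unlink()
   makes the OS resolve the name within the directory all over again;
   with a linear-scan directory format that's O(N) per call, O(N^2)
   overall. Returns the number of files removed, -1 on error. */
int remove_matching(const char *dir, const char *substr) {
  DIR *d = opendir(dir);
  if (!d) return -1;
  struct dirent *e;
  char path[4096];
  int removed = 0;
  while ((e = readdir(d)) != NULL) {
    if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
      continue;
    if (strstr(e->d_name, substr)) {
      snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
      if (unlink(path) == 0)
        removed++;
    }
  }
  closedir(d);
  return removed;
}
```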

Now, readdir gives me the fucking inode number. How the FUCK do I pass it back to this piece of shit operating system from hell, WITHOUT having it search through the whole damned directory AGAIN to find what it just gave me?

I would have thought that rm -rf for instance would be able to deal with this kind of job efficiently. I'm not sure it can. The excise function in the guts of coreutils for instance seems to be using unlinkat which gets a pathname. All attempts to google for this shit came up with advice to use find -inode -exec rm or some shit like that, which means find converts inode to name, rm gets the name, Unix converts the name back to inode…

So am I correct in that:

  • Neither Unix nor the commercial network filers nor nobody BOTHERS to use a hash table somewhere in their guts to get the inode in O(1) from the name (NOT A VERY HARD TASK DAMMIT), and
  • Unix does not provide a way to remove files given inode numbers, but
  • Unix does unfortunately make it easy enough (O(1)) to CREATE a new file in a directory which is already full of files, so that a loop creating those bloody files in the first place is NOT quadratic?? So that you can REALLY shoot yourself in the foot, big time?

Please tell me I'm wrong about the first 2 points, especially the second… Please please please… I kinda need those files in there…

(And I mean NOTHING, nothing at all works in such a directory at a reasonable speed because every time you need to touch a file the entire directory is traversed underneath… FUUUUUCK… I guess I could traverse it in linear time and copy aside somehow though… maybe that's what I'd do…)

Anyway, a great entry for the Accidentally Quadratic blog I guess…

Update: gitk fires up really damn quickly in that repository, showing all the changes. Hooray! Not the new files though. git citool is kinda… sluggish. I hope there were no new files there…


  • `find fucked-dir -maxdepth 1 -name "fuckers-name*" -delete` nuked the bastards; I didn't measure the time it took, but I ran it in the evening, didn't see it finish in the half hour I had to watch it, and then the files were gone in the morning. Better than I feared it would be.
  • As several commenters pointed out, many modern filesystems do provide O(1) or O(log(N)) access to files given a name, so they asked what my file system was. Answer: fucked if I know, it's an NFS server by a big-name vendor.
  • Commenters also pointed out how deleting a file given an inode is hard because you'd need a back-link to all the directories with a hard link to the file. I guess it makes sense in some sort of a warped way. (What happens to a file when you delete it by name and there are multiple hard links to it from other directories?.. I never bothered to learn the semantics under the assumption that hard links are, um, not a very sensible feature.)

"Information asymmetry" cuts both ways

They pretend to pay us, and we pretend to work.

 – A Soviet-era political joke according to Wikipedia, though I've only seen it in an English text describing the C++ object model

Having been born in the USSR, #talkpay – people disclosing their compensation on May 1 – put a series of smiles on my face, expressing emotions that I myself don't quite understand.

The numbers certainly left me pleased with the state of the oppressed proletariat (web designers, computer programmers, etc.) in the rotting capitalist society. I do hope that the $125K/year proletarian will not feel resentment towards the $128K/year guy in the next cubicle. I hope as well that the second guy's manager won't deny him a promotion so as to avoid further offending the first guy.

"Hope", yes; "count on" – no, which is why I won't be tweeting about my compensation. Because my numbers are almost certainly either lower or higher than yours, and why needlessly strain our excellent relationship – am I right, dear reader? (The other reason is having no Twitter account.)

So, yeah, a series of smiles and a range of emotions. But today I want to focus on one particular bit – the one mentioned by Patrick McKenzie:

Compensation negotiations are presently like a stock exchange where only your counterparty can see the ticker and order book. You’d never send an order to that exchange — it would be an invitation to be fleeced. “I happen to have a share of Google and want to sell it. What’s it go for?” “Oh, $200 or so.” “Really? That feels low.” “Alright, $205 then, but don’t be greedy.”

The spot price of Google when I write this is $535. Someone offering $205 for GOOG would shock the conscience. …folks have received and accepted employment offers much worse than that, relative to reasonably achievable market rates.

All very true, except that with a share of Google, they're all the same and you arguably know what you're buying, and with an employee's services, they aren't and you don't.

There seems to have been a string of Nobel Prize-winning economists who founded "X economics" – such as "behavioral economics" or "information economics." The common thing between "X economists" is, they all (correctly) tell me that I'm irrational and misinformed. But then they conclude that it's them and not me who should manage my money. I believe the latter does not follow from the former. But I'll save that discussion for another day.

In particular, Stiglitz, the information economics guy, opened his Nobel Prize lecture with an explanation of his superiority over other economists. This state of things arose, according to him, due to having grown up in a small town in Indiana, where things didn't work as predicted by standard economic models. Specifically, the proletariat got shafted, prompting Stiglitz to come up with formulas predicting how badly one can be swindled due to his ignorance of the market conditions.

Which is probably very true and relevant, but it cuts both ways. A guy paying me to work on a conveyor belt could measure my productivity perfectly, and probably knew the market for my labor better than I did – clearly he had an advantage. A guy paying me to program, however, might know more about wages than I do. But he certainly doesn't have the foggiest idea what I'm doing in return for his money, even if he's a better programmer than I am.

Basically the "knowledge worker's" contract is something like this:

We'll give you a precisely defined salary and a benefits package. In return, we request that you handle some problems that we're told we're having. We hope that you'll solve them well enough to prevent us from having to know what they were in the first place. Please help us maintain the feeling that we own an asset similar to land or gold or something. Please keep the realization that we're more like the operator of a flying circus than a landowner from disturbing us. And certainly, never, ever ask us what to do with any of the moving pieces of this flying circus, because we seriously have no idea.

Of course, most people have a direct manager overseeing their work in the flying circus who knows more about it than the owner. But, ya know… "…then for the next 45 minutes I just space out", as the quote from Office Space goes. Seriously, it's bloody hard to tell if you're only working at half your ability – the only guy who knows is you. And this information asymmetry seems to me kind of symmetrical with that other asymmetry – employers being better at tracking the labor market, negotiating, etc. So, see the "Soviet" joke at the top of the page…

(Of course it's much better with an actual market for labor – I'm just saying that it's still far from perfect, and it'll get increasingly farther from "perfect" as the knowledge becomes more widely dispersed and you basically just have to trust experts at every step. An economy of dishonest experts – now that's a bloody nightmare. And by the way I actually think we now have enough newfangled technology together with ages-old fallibility to get there. AAA-rated MBSs are just the beginning.)

For a closing remark, I think it's fitting to mention all those bozos saying how "higher compensation does not increase motivation." (A management consultant saying something else might not get his management consulting gig, so I understand where they're coming from.)

To that I say: for me, at least, higher compensation leads to higher awareness of "spacing out" and such – call it "guilt" if you like. Pay me enough and I will think things like, "I'd really like to ignore this here shit, and it's SO ugly, and nobody will ever know. But, sheesh, they paid me all this money, just like they promised. And I sorta promised that I'll take care of the flying circus in return. And so it's seriously not OK to leave this here shit unattended. I better tell someone, and what's more, this here patch is definitely mine to clean, so here goes."

One danger in underpaying someone is it might prevent those thoughts from entering their minds. So, employers – caveat emptor. 'Cause goodness knows that the easiest way to raise one's compensation in the knowledge economy is to simply space out more while getting paid the same.


A "WTF is that sound" widget

A core function of an OS is dividing resources between apps: multiple windows per screen, files per disk, sockets per Internet connection, etc.

The machine's goddamned SPEAKER is one of those resources.

Now let's say you're working at your desktop. Your boss walks in for a chat. And now, midway through the conversation, your computer's goddamned speaker emits something like: "Major breakthrough reached at the G-20 meeting!", or "I came in like a wrecking ball!", or "Aaaah… ah! yeah, right there… AAAGH!!"

Your computer might have said it because you're watching news, or that other stuff, instead of working. Alternatively, perhaps some uninvited pop-up popped up an hour earlier - and then decided to self-activate at the worst possible moment.

Either way, it'd be nice to be able to shut the thing off quickly – say, right after "Hi, I'm Wendy" and before it gets to "yeah, right there." And not only that, but you might want to kill the app without giving focus to its window, because who knows what Wendy's up to in that window.

Now, closing a window or deleting a file is easy, because you have a visible handle to click on. Not so with sound, unfor-tu-nate-ly. Big mistake on behalf of OS designers, says I… BIG MISTAKE.

And that, folks, is the perfect use case for a "WTF is that sound" widget. I haven't figured it out down to an actual mockup or anything, but, ya know, it'd be a list of who the FUCK is using the GODDAMNED SPEAKER. "Who the fuck" might be a list of window titles – or maybe not, because you might not want a title like "Erm erm Wendy erm ahem". So it might be a list of app names without the window titles. Most importantly, if just ONE app is using the goddamned speaker, then there'd be just one red button that you press to KILL the fucking thing.
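One Linux-flavored way to build the "who the fuck" list – and this is just a sketch of one possible implementation, not a claim about how the widget should work – is to walk /proc and see which processes hold the ALSA devices open. (In the PulseAudio era you'd often just find the sound server this way, and you'd then ask it, e.g. via `pactl list sink-inputs`, who its clients are.)

```c
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <unistd.h>
#include <ctype.h>

/* List processes with a file descriptor pointing into /dev/snd -
   i.e., candidates for "who the fuck is using the goddamned speaker".
   Returns the number of such processes found, -1 if /proc is absent.
   Processes we lack permission to inspect are silently skipped. */
int list_speaker_users(FILE *out) {
  DIR *proc = opendir("/proc");
  if (!proc) return -1;
  struct dirent *p;
  int found = 0;
  while ((p = readdir(proc)) != NULL) {
    if (!isdigit((unsigned char)p->d_name[0])) continue;
    char fddir[300];
    snprintf(fddir, sizeof fddir, "/proc/%s/fd", p->d_name);
    DIR *fds = opendir(fddir);
    if (!fds) continue;          /* no permission - not our process */
    struct dirent *f;
    while ((f = readdir(fds)) != NULL) {
      char link[512], target[512];
      snprintf(link, sizeof link, "%s/%s", fddir, f->d_name);
      ssize_t n = readlink(link, target, sizeof target - 1);
      if (n <= 0) continue;
      target[n] = 0;
      if (strstr(target, "/dev/snd/")) {
        fprintf(out, "pid %s holds %s\n", p->d_name, target);
        found++;
        break;                   /* one hit per process is enough */
      }
    }
    closedir(fds);
  }
  closedir(proc);
  return found;
}
```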

I'm hereby placing the idea in the public domain. Now go ahead and make {the world a better place, billions, an abandoned GitHub project implementing this dumb idea}.

P.S. I'm fine, thank you, but yes, the inspiration comes from real life, and no, it was neither Miley Cyrus nor Wendy, but a weird tune without words that ended as mysteriously as it commenced, and I still don't know what played it.

Update: no, it's not like a mute button. If we're seriously being serious about it, then pressing a mute button is like turning off the screen. When the boss walks in and there's a noise and you mute everything including the ambient music it's off, just like pressing the screen's off button would be off. More seriously, if you have umpteen apps and tabs and shit and something starts to emit sound and you don't know what that is, then a mute button doesn't help you, not if you want to listen to anything else at the same time which you might want to. So you need a "sound manager" just like you need a window manager. (Imagine having to guess which of the umpteen [x] buttons to close to make a particular window disappear. Sucks, right? Exactly the situation with sound, which, if it doesn't happen to be synced to video playing in some window, comes from fuck knows where – and even if it's synced to video, you must sift through windows to find that video! Why can't the window/tab/whatever at least have an indication of "sound is being emitted by the sucker who opened me"? Seriously, it's just dumb.)

A Sokoban levels design programming contest

I hate puzzles with a passion; I think of them as Gordian knots best untied with a sword, a machine gun or whatever else you can bring to bear on the problem.

The world of computer programmers, however – the world which I entered with the sole purpose of working for the highest bidder – is a world full of people who sincerely love puzzles. And if you visit this blog, perhaps you're one of these people.

If so, you might be pleased to learn about the recently launched Sokoban levels design contest, operated by gild – a great hacker, my long-time co-worker, an IOCCC winner, and a participant in Al Zimmerman's programming contests which he cites as inspiration for his own new contest.

The rules are precisely defined here; the general idea is to design Sokoban levels falling into different problem classes. Submitted levels are scored based on the length of the shortest solution (longer is better), and normalized such that the level taking the most steps to solve right now gets the score of 1. With 50 problem classes, the maximal overall score is 50. But with all the other cunning contestants submitting their own levels, your levels' score might be dropping every day or hour!
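My reading of the scoring – and it's my paraphrase of the rules, not their authoritative text – boils down to something like this:

```c
/* Each class's score is your level's shortest-solution length divided
   by the longest shortest-solution currently submitted in that class,
   so the current best level in a class scores exactly 1.0. The overall
   score is the sum across classes (at most 50 with 50 classes). */
double total_score(const int *my_steps, const int *best_steps, int classes) {
  double total = 0.0;
  for (int i = 0; i < classes; ++i)
    if (best_steps[i] > 0)
      total += (double)my_steps[i] / best_steps[i];
  return total;
}
```

So a level needing 50 steps in a class whose current record is 100 steps is worth 0.5 – until someone's 200-step monster halves it again.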

And I really mean every day or hour – even now at the very beginning there are several submissions per day. Judging by the rankings page, people spread around the globe are busy improving their Sokoban level-designing software and resubmitting better solutions. (Or they might be doing it in their heads; you don't need to submit any code, just the levels. I hear that occasionally a contestant using no software at all gets a rather good result in an Al Zimmerman's contest… What happens inside the heads of such people I don't know.)

There's also a discussion group, and if you're among the cunningest, most tenacious puzzle lovers who'll get to the top, you'll get – guess what? – a puzzle! Specifically, a gift card which you can use to buy, say, this Rubik's cube – or rather a Rubik's fuck-knows-what. I guess cubes are for sissies.

I personally think it's a bloody cool community/subculture to be in; a pity that I don't quite have the brains for it. (Could I hold my own if I really liked it? Or maybe I would really like it if I could hold my own? These are the types of questions it all brings to my mind.)

Anyway – if you're that sort of guy, it looks like a great thing for your brain to chew on. Good luck!

We're hiring

On behalf of easily the hottest computer company in Israel right now (self-driving cars, big noisy IPO, etc. etc.), I'm looking for people in the two areas I'm involved with:

  • software infrastructure / host tools
  • chips / hardware-software co-design

If you're a strong programmer, especially one who's also interested in math/vision/learning or low-level/hardware/optimization or both, you'll probably find enjoyable stuff to do. It looks like there's still plenty of room for growth, which usually means a lot of work to choose from. On the other hand, there's no risk of "wasting" your effort on a product which might never ship.

If you aren't a programmer but a hardware hacker (experienced in ASIC/FPGA or just straight out of school), we're hiring for these positions as well – I'll forward your CV to the relevant people.

People who work autonomously are a great fit, and they tend to like us - we gladly dial the extent of management down to near-zero levels when appropriate.

Also, I'd love to find people who tend to be at the center of things working with others – though there's always something in stock for people who'd rather spend time alone with a worthy problem.

Relevant experience might be a plus, of course, but it's not a must.

Send email to Yossi.Kreinin@gmail.com, and tell your friends. Seriously, one never knows and it's kinda awkward to sell vaguely described positions at a personal blog, but I think it can be a really nice opportunity.

Update: I should mention that we're in Jerusalem and while working from another location is not impossible, it's an extremely rare arrangement. So if you don't plan to relocate and send your resume nonetheless, I might have nothing to offer even though you sent a perfectly good one.

(It so happens that most resumes come from abroad. Blogging in Hebrew would make posts about open positions more effective, I guess; it turns out that people tend to mostly read in their mother tongue. I find it rather weird – I mean when it comes to computing where the terminology and even the code is in English, so writing about it in another language is invariably awkward. The fact remains, however, that most of my readers are from the US and the UK… Had the British Empire conquered four quarters of the world instead of just one, certainly language-wise life would have been simpler for us all.)

Capital vs labor: who risks more?

AFAIK, in the developed world, the income tax rate is often higher than the capital gains tax. In particular, income is taxed "progressively", meaning that higher income results in a higher rate – which doesn't always happen with capital gains. (It looks like it does happen in the US, but income appears to be taxed "more progressively".)
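
To make "progressive" concrete, here's a toy sketch – the brackets and rates are entirely made up, not any real country's schedule – showing how the effective rate on income climbs with the amount, while a flat capital gains rate stays put:

```python
# Made-up progressive brackets: (upper bound, marginal rate).
BRACKETS = [(10_000, 0.10), (40_000, 0.25), (float("inf"), 0.40)]

def progressive_tax(income):
    """Tax each slice of income at its bracket's marginal rate."""
    tax, lower = 0.0, 0
    for upper, rate in BRACKETS:
        if income <= lower:
            break
        tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def flat_capital_gains_tax(gains, rate=0.20):
    return gains * rate

for amount in (20_000, 100_000):
    print(amount,
          progressive_tax(amount) / amount,         # effective rate rises: 0.175, then 0.325
          flat_capital_gains_tax(amount) / amount)  # flat 0.20 either way
```

With these invented numbers, quintupling your income nearly doubles your effective income-tax rate, while the capital gains rate doesn't budge – which is all "more progressively" means here.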

One justification for this is that investors risk losing much or all of their capital. Workers, on the other hand, are guaranteed their wages. To some extent, the difference in the tax rates compensates for the difference in the risks.

To me this is sensible at first glance but completely harebrained on second thought:

  • The worker's big risk is choosing the profession. Higher-income professions often require a more expensive education. If demand for the skills drops in the future, the income might fail to cover the worker's investment in acquiring those skills. And assuming that learning ability drops with age, the risk only grows the longer you've been at it.
  • Moreover, this risk is not diversified. Even a small-time investor can spread his risk using some sort of index fund, betting on many stocks at a time. For a worker, on the other hand, there's no conceivable way to be 25% lawyer, 25% carpenter, 25% programmer and 25% neurosurgeon.

The latter point may not interest utilitarian economists focused on GDP and "the greatest good for the greatest number" – for society as a whole, risk is always diversified, even if Joe Shmoe is stuck with the crummiest profession, one that looked really promising when he chose it. Moreover, an efficient market would provide a way to insure against the risk of poorly choosing your occupation. (I don't think real-life markets have provided such a thing, though I might be mistaken.)

But the first point – that the risk exists, and that it's more likely than not to be proportionate to the projected income – by itself kinda erodes the ground underneath an income tax higher than the capital gains tax, not? I mean, regardless of the motivation, it seems internally inconsistent. Unless the risks are all quantified and it all magically cancels out.

It's not that I'm complaining – unlike most professionals, we programmers get stock options. (Speaking of which – central bankers of the world, The Programmers' Guild says thanks a bunch for QE and ZIRP! Remind us to send some flowers.) I just honestly don't get it.

Econ-savvy readers: what do I miss?

(Please don't accuse me of failing to use the Google. Maybe I did so badly, but I did try using the Google. The Google told me, for instance, that the optimal capital gains tax is zero, because "you can't transfer from capitalists to workers", because the transfer depletes the capital pool or something. But I didn't understand how it accounts for the case where the workers are also capitalists because they invest their savings; and what if there's a progressive tax on capital gains? Is it, or is it not possible to "transfer from capitalists to workers" – the flesh-and-blood people, as opposed to abstract categories – regardless of whether you'd want to? Or if that's all nonsense – then is the optimal income tax also zero, because, hey, you're actually taxing human capital gains? Bringing us back a couple hundred years, to when we just had a sales tax? And again, not that I'd complain, I just don't quite follow the logic.)

(Also from the Google there's the claim that since nominal capital gains are taxed, the tax rate is actually higher than it looks because of inflation. But then the solution would be to tax inflation-adjusted gains, not to lower the rate arbitrarily, not? Also, wages are AFAIK the last set of prices to rise upon inflation – the "stickiest" – so workers' investment in their skills is continuously nibbled at by inflation as well. And the point made about "discouraging savings" – well, a high income tax discourages costly education and the subsequent ongoing investment in one's skills, which is a form of investment/savings/call it what you like. Same diff.)
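
The inflation point is easy to see with toy numbers (the figures below are invented for illustration): if you tax the nominal gain but half of that gain was just inflation, the effective rate on the real gain is much higher than the headline rate.

```python
# Invented example: taxing nominal vs. inflation-adjusted capital gains.
principal = 100.0
nominal_gain = 10.0   # a 10% nominal return
inflation = 0.05      # 5% inflation over the same period
rate = 0.25           # headline capital gains tax rate (made up)

real_gain = nominal_gain - principal * inflation  # 5.0 – only half the gain is real
tax_on_nominal = rate * nominal_gain              # 2.5 – but the whole gain is taxed
effective_rate_on_real = tax_on_nominal / real_gain
print(effective_rate_on_real)  # 0.5 – double the headline 25% rate
```

Which is exactly why taxing inflation-adjusted gains, rather than arbitrarily lowering the rate, would be the direct fix.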

I think it's pretty obvious that workers risk much more than investors. Whether this should lower their taxes I don't know, but it sort of makes sense to me.

Do company names actually matter?

This is a bit of a trite thought, but: Can it be that company names actually matter? Consider some examples:

  • Microsoft: stands for "microprocessor software". Still the dominant software vendor for descendants of the original microprocessor. Never made commercially successful hardware.
  • Intel: stands for "integrated electronics" (chips.) Upon commoditization of DRAM, successfully pivoted to microprocessors – a harder-to-design, easier-to-differentiate kind of chip. Growth slowed when less "vertically integrated" competitors emerged, with both chip manufacturing (fabs) and circuit design/"IP" (CPUs, GPUs etc.) getting commoditized (TSMC + ARM is cheaper than Intel.) Had/has a consumer product division which was never successful.
  • Google: "a very large number", does "things that scale". After its original service, web search, had success with webmail and a smartphone OS. Not known for particularly attentive customer support (support doesn't scale.)
  • Amazon: "plenty", does "things that scale". Originally an online retail company, had success with e-book readers and "cloud infrastructure". Does everything it can so you never have to talk to support, which doesn't scale.
  • Samsung: "three stars", the word "three" representing "big, numerous and powerful". Indeed they are, in a wide variety of areas.
  • Facebook: a one-product company. Buys up other messaging, sharing and social networking companies so people can't flock elsewhere. Facebook phone tanked.
  • Twitter: another one-product company.
  • Apple: name stands for sweet stuff (say loyal users to befuddled me.) Used to be called "Apple Computer", renamed to plain "Apple" in 2007. Successfully incorporated device and chip design, server- and client-side software and retail into its business.
  • IBM: international business machines. A long-time near-monopoly in business computing, still going relatively strong in data storage systems, job schedulers and other corporate IT stuff. No considerable success in any consumer market despite the IBM PC being the first wildly successful computer for consumers (PC division eventually sold to Lenovo.)
  • The Walt Disney company: vigorously lobbies for copyright protection for characters created during Walt's days. Few characters created after Walt's death are nearly as successful; arguably the most successful ones came from acquiring Pixar, Marvel and Lucasfilm. The money spent on buying Pixar probably could have been saved if the company hadn't fired John Lasseter but had instead let him develop his ideas about computer animation. Would Walt have failed to see the potential?
  • Toyota: originally "Toyoda", a family name. Today's CEO is a founder's descendant. The founder's first name is not in the company's name. The company does not seem weakened by the founder's demise.

Some of my Google/Wikipedia-based "research" might be off. And I doubt that founding a company called "Huge Profits" would necessarily net me huge profits. However:

  1. The company name often reflects founders' vision.
  2. Such a vision doesn't change easily, especially if a company is successful – because success is convincing/addictive, and because an organization was by now built around that vision.
  3. Once the founders are gone, the vision becomes even harder to change because nobody has the authority and confidence. (Indeed one way in which organizations die, IMO, is through the continuous departure of people with authority who are not succeeded by anyone with comparable authority. Gradually, in more and more areas, things operate by inertia without adapting to changes.) Steve Jobs had to come back to Apple Computer to rename it to just "Apple".

So if you're the rare person capable of starting a successful company, and you insist on it being a huge success and not just a big one, make the vision as unconstrained as you can imagine.

P.S. For investment advice taking company names into account, I recommend Ian Lance Taylor's excellent essay titled Stock Prices.