Hiring (self-driving algos, HLL compiler research)

OK, so 2 things:

1. If you send me a CV and the person gets hired to work on self-driving algos – machine vision/learning/mapping/navigation – I'll pay you a shitton of money. (Details over email.) These teams want a CS/math/physics/similar degree with great grades, and they want programming ability. They'll hire quite a lot of people.

2. The position below is for my team and if you refer a CV, I cannot pay you a shitton of money. But:

We're developing an array language that we want to efficiently compile to our in-house accelerators (multiple target architectures, you can think of it as "compiling to a DSP/GPU/FPGA.")

Of recent public efforts, perhaps Halide is the closest relative (we're compiling AOT instead of processing a graph of C++ objects constructed at run time, but I'm guessing the work done at the back-end is somewhat similar.) What we have now is already beating hand-optimized code in our C dialects on some programs, but it's still a "blue sky" effort in that we're not sure exactly how far it will go (in terms of the share of production programs where it can replace our C dialects.)

As usual, we aren't looking for someone with experience in exactly this sort of thing (here especially it'd be hopeless, since there are few compiler writers and most of them work on lower-level languages.) Historically, the people who enjoy this kind of work have a background in what I broadly call (mislabel?) "discrete math" – formal methods, theory of computation, board game AI, even cryptography; basically, anywhere you have clever algorithms in a discrete space that can be shown to work every time. (Heavyweight counter-examples, each missing one of "clever", "discrete" or "every time" – OSes, rendering, or NNs. This of course is not to say that experience in any of these is disqualifying, just that they're different.)

I think of it as a gig combining depth that people expect from academic work with compensation that people expect from industry work. If you're interested, email me (Yossi.Kreinin@gmail.com).

All positions are in Jerusalem.

Fun won't get it done

OK, published at 3:30 AM. That's a first!

So. Got something you want to do over the course of a year? Here's a motivation woefully insufficient to pull it off:

  • It's fun!

What could give you enough drive to finish the job? Anything with a reward in the future, once you're done:

  • Millions of fans will adore me.
  • It will be the ugliest thing on the planet.
  • I will finally understand quantum neural rockets.
  • We will see who the loser is, Todd!
  • I will help humanity.
  • I will destroy humanity.

It doesn't matter how noble or ignoble your goal is. What matters is delaying gratification. Because even your favorite thing in the world will have shitty bits if you chew on a big enough chunk of it. A few months' or years' worth of work is always a big enough chunk, so there will be shitty bits. Unfortunately, it's also the minimum-sized chunk to do anything of significance.

This is where many brilliant talents drown. Having known the joy of true inspiration, it's hard to settle for less – which you must, to have any impact. Meanwhile, their thicker peers happily butcher task after task. Before you know it, these tasks add up to an impactful result.

In hindsight, I was really lucky in that I chose a profession for money instead of love. Why? Stamina. Money is a reward in the future that lets you ignore the shittier bits of the present.

Loving every moment of it, on the other hand, carries you until that moment which you hate, and then you need a new sort of fuel. Believe me, I know. I love drawing and animation, and you won't believe how many times I started and stopped doing it.

But the animation teacher who taught me 3D said he was happy to put textures on toilet seat models when he started out. That's the kind of appetite you need – and very few people naturally feel that sort of attraction to toilet seats. You need a big reward in the future, like "I'm going to become a pro," to pull it off.

But I don't want to become a pro. I don't want to work in the Israeli animation market where there's scarcely a feature film made. I don't even want to work for a big overseas animation studio. I want to make something, erm, something beautiful that I love, which is a piece of shit of a goal.

Because you know where I made most progress picking up actual skills? In an evening animation school, where I had a perfectly good goal: survive. It's good because it's a simple, binary thing which doesn't give a rat's ass about your mood. You either drop out or you don't. But "something I love" is fluid, and depends a lot on the mood. And when you hate this thing you're making, as you sometimes will, it's hard to imagine loving it later.

Conversely, imagining how I don't drop out is easy. This is what I was imagining when sculpting this bust, which 90% of the time I hated with a passion because it looked like crap. But I thought, "I'm not quitting, I'm not quitting, I'm not quitting, hey, I get the point of re-topology in Mudbox, I'm not quitting, I'm not quitting, hey, I guess I see what the specular map does, I'm not quitting… Guess I'm done!"

And now let's talk about beauty for a moment.

I'm a programmer. I like to think that I'm not the thickest, butcherest programmer, in that I understand the role of beauty in it. To the trained eye, programs can be beautiful as much as math, physics or chess, and a beautiful program is better for business than a needlessly uglier one. (Ever tried pitching the value of beauty to someone businessy? Loads of fun.)

But you know why beauty is your enemy? Because it sucks the fun out of things. How? Because you're making this thing and chances are, it's not beautiful according to your own standard. The trap is, your taste for beauty is usually ahead of your creative ability. In any area, and then in any sub-area of that area, ad infinitum, you can tell ugly from beautiful long before you can make something beautiful yourself. And even if you can satisfy your own taste, often the final thing is beautiful, but not the states it goes through.

So the passionate, sensitive soul is hit twice:

  1. You're driven by fun and inspiration because you once experienced it and now you covet it.
  2. Your sense of beauty, frustrated by the state of your creation, kills all the fun – that very fun which you insist must be your only fuel.

Life is easier if you want a yacht. I think you can buy a decent one for $300K, and certainly for $1M. Now all you need to do is make that money, doing whatever it takes – imagining that yacht will help you do anything well! If you want beauty, however, I do not envy you.

How do I cope with my desire for beauty? The first step is acknowledging the problem, which I do. The fact is that my worst failures in programming came when I insisted on beauty the most. The second step is shunning beauty as a goal, and making it into a means and a side-effect.

I need a program doing at least X, taking at most Y seconds, at a date not later than Z. I'll keep ugliness to a minimum because ugly programs work badly. And if it comes out particularly nicely, that's great. But beauty is not a goal, and enjoying the beauty of this program as I write it is not why I write it.

And if you think it's true for commercial work but not open source software, look at, I dunno, Linux. Read some Torvalds:

Realistically, every single release, most of it is just driver work. Which is kind of boring in the sense there is nothing fundamentally interesting in a driver, it's just support for yet another chipset or something, and at the same time that's kind of the bread and butter of the kernel. More than half of the kernel is just drivers, and so all the big exciting smart things we do, in the end it pales when compared to all the work we just do to support new hardware.

Boring bits. Boring bits that must be done to make something of value.

Does this transfer to art or poetry or any of those things whose whole point is beauty? Well, yeah, I think it does, because no, beauty is not the whole point:

  • The most important thing about a drawing is that it's done. Now it exists, and people can see it, and you can make another one. Practice. They will not come out very well if they don't come out.
  • Often people like your subject. There's a continuum between "it's beautiful in a way that words cannot convey" and "I love how this song expresses my favorite political philosophy." To the extent that a work of art tells a story, or even sets up a mood, its beauty does become a means to an end.
  • Just because the end result is beautiful to the observer, and even if that's the only point, doesn't mean every step of making it was an orgy of beauty for whoever made it. Part of what goes into it is boring, technical work.

So here, too, I'm trying to make beauty a non-goal. Instead my goals are "make a point" and "keep going," and I try to add beauty, or remove ugliness, as I go.

For example, I didn't do a graduation project in the evening school, but I animated a short on my own in the same timeframe, and I published it, even though it's not the beautiful thing I always dreamed about making. And I'm not sure anyone gets the joke except me. (I'm not sure I get it anymore, either.)

Now my goal is "make another one." It's a good goal, because it's easy to imagine making another one. It's proper delayed gratification.

And if you enjoyed programming 20 years ago and are trying to reignite the passion, I suggest that you find a goal as worthy for you as "fun" or "beauty", but as clear and binary as a yacht. And you can settle for less worthy, but not for less clear and binary. Because everything they told you about "extrinsic motivation" being inferior to "intrinsic motivation" is one big lie. And this lie will fall apart the moment you sink your teeth into a bunch of shit, as will always happen if you're trying to accomplish anything.

Follow me on Twitter to receive more pearls of wisdom.


The habitat of hardware bugs

I wrote a post on embeddedrelated.com about hardware bugs – places where they're rarely to be found, places they inhabit in large quantities, and why these insects flourish in some places more than others.

It's one of those things I wish I'd been told when I started to fiddle with this shit – that while a chip is monolithic from a manufacturing point of view, from the logical spec angle it's a hodgepodge of components made and sold by different parties with very different goals and habits, tied together well enough to be marketable, but not enough to make it coherent from a low-level programmer's point of view. In fact, it's the job of the few low-level programmers to hide the idiosyncratic and buggy parts so as to present a coherent picture to the many higher-level programmers – the ones whose mental well-being is an economically significant goal.


Looking for senior IT/DevOps people

I wouldn't spam you with these job offers if it didn't work :-) So, we're looking for senior IT people to work at our Jerusalem offices – managers and hands-on people alike. We have rapid growth, "Big Data" (definitely enough to crash Excel – in fact, at one point it was close to physically crashing through the floor due to the storage servers' weight, but luckily that's been handled), "HPC" (biggish server farms, distributed build & tests, etc.), and many other buzzwords [1]. I don't know where IT ends and DevOps starts, but I guess a good candidate could have either in their CV, so there.

If you have qualified friends looking for a challenging, well-paying job at a fun place, send their CVs, the sooner the better – we're in a hurry (rapid growth!), so early birds are more likely to get the can of worms. As always, "challenging" is a downside as much as an upside (a place where IT means Exchange, SAP and little else might pay very well for a more predictable and less demanding job.)

We value experience in building and maintaining non-trivial systems, and technical reasoning (X happens because of Y, Z is most efficient if you use it to do W, etc.) We also value experience in higher-level areas such as management and purchasing, and business reasoning (don't hook X and Y together since their vendors compete and will sabotage the project, Z beats W in terms of total cost of ownership, etc.) We do kinda lean towards thinking of technical aptitude as a cornerstone on top of which solid higher-level expertise is built. (We've seen managers snowed by vendors, reports, etc., which is a perennial problem in tech at large and isn't restricted to IT.)

If you'd like to hear more details, please email Yossi.Kreinin@gmail.com

[1] What we don't have is a heavy-duty web site/application, which might make the position less relevant for some.

A layman's view of the economy

First of all, I proudly present a 2-minute short that I animated!

…And the same thing on YouTube, in case one loads better than the other:

One thing I learned making the film is that my Russian accent colors not only my words, but any noise coming out of my mouth. So I'm not the most versatile voice actor.

Anyway, we certainly have a debt crisis, and easy credit policies keep producing still more debt. I don't think interest rates have ever stayed so low for so long, everywhere.

Economists argue both for and against debt expansion [1], as they argue about everything.

My own take is as simple as my sparse knowledge ought to make it:

  • Unprecedented conditions produce unprecedented outcomes.
  • Booms are usually gradual, and busts are sudden.

No unusual boom has gradually arisen from unusual monetary policy, and it's been a while. But something unusual ought to happen in unusual conditions! Thus one expects a sudden, unusual bust down the road.

That's it. It's like a physicist's proof [2] that one's attractiveness peaks at some distance from the observer. At the distances of zero and infinity, visual attractiveness is zero (you can't see anything.) Therefore, attractiveness as a function of distance has a maximum somewhere in between. True, kinda, and it didn't take a lot of insight into the nature of attractiveness – much like my peak debt proof doesn't require an understanding of the economy [3].
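If you want it spelled out, the "proof" fits on one line. A minimal formalization (mine, not the physicist's, and a bit pedantic for the joke), assuming attractiveness A(d) is a continuous, nonnegative function of viewing distance d:

    % vanishes at both extremes, positive somewhere in between => maximum at a finite, nonzero distance
    \[
    A(0) = 0, \quad \lim_{d \to \infty} A(d) = 0, \quad A(d_0) > 0 \text{ for some } d_0
    \;\Longrightarrow\;
    \exists\, d^{*} \in (0, \infty) : \; A(d^{*}) = \max_{d \ge 0} A(d).
    \]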

Will today's "Brexit" trigger the global downturn predicted by Yossi Kreinin's Rule of Unprecedented Conditions? Probably not by itself. I think it's a symptom more than a cause [4], and the big bad thing comes later.

In the meantime, here's hoping that my little film (started when "Grexit" was a thing, completed just in time for Brexit) was funnier than the average forward from grandma [5].

Happy Brexit! And if you follow people on Twitter, there's a strong case for following me as well.

[1] Bibliography: Nobody Understands Debt except Krugman; Does Krugman Understand Debt?

[2] I think a particular famous physicist said it, but I forget who.

[3] …and I can't say I have any understanding of the economy. That said, I've owed and paid off a lot of debt, and got to negotiate with many bankers. And I can tell you that "debt is money we owe to ourselves", Krugman's catchphrase, feels unconvincing to creditors – as many people and whole nations have found out.

[4] In fact, I just got an email from an asset manager saying that it's good for the UK in the longer run, elevating Brexit from a symptom to a cure. But he didn't say "good for everyone", and then I'm not sure his crystal ball is better than yours or mine.

[5] I linked to /r/forwardsfromgrandma since, regardless of the politics of either its members or their grandmas, I ought to give credit for the brilliant term – it's definitely funny because it's true. I've watched many relatives acquire the habit of forwarding various wingnut stuff as they age. Most frighteningly, my own urge to email such things gets harder to resist every year. I can sense my own ongoing grandmafication; between you and me, an animated short about debt might be a part of "it." Scary, scary stuff.

Evil tip: avoid "easy" things

Now you see that evil will always triumph, because good is dumb.

– Dark Helmet

Evildoers live longer and feel better.


My writing has recently prompted an anonymous commenter to declare that people like me are what's wrong with the world. Oh joy! – finally, after all these years of doing evil, some recognition! Excited, I decided to share one of my battle-tested evil tips, which never ever failed evil me.

Don't work on "easy" things

An easy thing is a heads-they-win, tails-you-lose situation. Failing at easy things is shameful; succeeding is unremarkable. Much better to work on hard things – heads you win, tails they lose. Failure is unfortunate but expected; success makes you a hero.

Treat this seriously, because it snowballs. The guy working on the "hard" thing gets a lot of help and resources, making it easier to succeed – and easier to move on to the next "hard" thing. The guy doing the "easy" tasks gets no help, so it's harder to succeed.

Quotation marks all over the place because, of course, what counts is perception, not how hard or easy it really is. The worst thing to work on is the hard one that's perceived as easy – the hard "easy" thing. The best thing is the easy "hard" one. My years-long preference for low-level programming results, in part, from its reputation as a very hard field, when in practice it takes a little knowledge and a lot of discipline – but no outstanding skills.

(Why then do many people fear low-level programming? Only because of how hard it bites you the first few times. People who felt that pain and recoiled respect those who've moved past it and reached productivity. Now you know why people take shit from the likes of Ken Thompson and Linus Torvalds, and then beg for more.)

The point where this gets really evil is not when a heroic doer of hard things decides to behave like a Torvalds. That's more stupid than evil. You'll get away with being a Torvalds, but it always costs some goodwill and hence, ultimately, money. So the goal-oriented evildoer always tries to be on their best behavior.

No, the point where this gets really evil is when you let them fail. When they come to you thinking that it's easy, and you know it's actually very hard, and you turn them down, and you let them fail a few times, and you wait until they come back with a readjusted attitude – that's evil.

Here, the evildoer needs to strike a delicate balance, keeping in mind The Evildoer's Golden Rule:

  • You can only sustain so much do-gooding; however,
  • Your environment can only take so much evildoing, and you need your environment.

Here's the rule applied to our situation:

  • Working on the hard "easy" thing – all trouble, no credit – is going to be terrible for you. You'll get a taste of a do-gooder's short, miserable life.
  • However, if this thing is so important that a failure would endanger the org, maybe you should be the do-gooder and save them from their misconceptions at your own expense. Maybe. And maybe not. Be sure to think about it.

The upshot is, sometimes the evildoer gets to be the do-gooder, but you should know that it's hazardous to your health.

Making easy things into hard ones: the postponing gambit

Sometimes you can't weasel out of doing something "easy." An interesting gambit for these cases is to postpone the easy thing until it becomes urgent. This accomplishes two things at once:

  • Urgent things automatically become harder, and a person in a hurry more important. The later it's done, the easier it is to get help (while retaining the status of "the" hero at the center of it all who made it happen.)
  • Under time pressure, the scope shrinks, making the formerly "easy" and now officially "hard" thing genuinely easier. This is particularly useful for the really disgusting, but unavoidable work.

But it is a gambit, because postponing things until they become urgent is openly evil. (Avoiding easy things is not – why, it's patriotic and heroic to look for the harder work!) To get away with postponing, you need an excuse:

  • other supposedly urgent work;
  • whoever needs this thing not having reminded you;
  • or even you having sincerely underestimated the difficulty and hence, regrettably, having postponed it too much – you're so sorry. (This last excuse has the drawback of you having to admit an error. But to the extent that urgency will make the scope smaller, the error will become smaller, too.)

One thing you want to prevent is people learning to remind you earlier. The way to accomplish it is being very nice when they come late. If people feel punished for reminding too late, they'll come earlier next time, and in a vengeful mood, so with more needless tasks. But if they're late and you eagerly "try to do the best under the circumstances", not only do you put yourself under the spotlight as a patriotic hero, you move the forgetful culprit out of the spotlight. So they'll form a rosy memory of the incident, and not learn the value of coming earlier – precisely what we want.

One thing making the postponing gambit relatively safe is that management is shocked by the very thought of people playing it, as can be seen in the following real-life conversation:

Babbling management consultant: A lot of organizations have a problem where they only work on urgent things at the expense of important, but less urgent ones.

Low-ranking evildoer manager (in a momentary lapse of reason): Why, of course! I actually postpone things to get priority around here.

Higher-ranking manager (in disbelief): You aren't serious, of course.

Low-ranking evildoer (apparently still out to lunch): I am.

Higher-ranking manager (firmly): I know you aren't.

Low-ranking evildoer finally shuts his mouth.

See? Sometimes they won't believe it if you say it to their face. So they're unlikely to suspect you. (Do people reporting to me play the postponing gambit? Sometimes they do, and I don't resent them for it; their priorities aren't mine. But in the worst case, you should expect a lot of resentment – it's practically high treason – so you should have plausible deniability.)


To a very large extent, your productivity is a result of what you choose to work on. Keep things perceived as easy out of that list. When you can't, postponing an "easy" thing can make it both "harder" and smaller.

Happy evildoing, and follow me on Twitter!


Looking for a functional safety/ISO 26262 expert (anywhere on the globe)

Unlike most positions mentioned here, this one includes the possibility of working remotely (certainly from Europe and I think from elsewhere, too), with occasional visits to Jerusalem.

Functional safety experts with automotive experience are generally rare and in demand, meaning that

  • they're probably gainfully employed, and
  • I'm not counting on one of them reading this blog.

However, I imagine that a friend of a safety expert might be among my readers. If you're that reader, you can tell your friend the safety expert that we're very eager to hire them, and will go a long way to make an attractive proposition.

We particularly value experience at the chip/ASIC side of things (translating ISO 26262 requirements to actionable guidelines on hardening the design in question, together with a verification methodology and the required safety documentation.) We also value recommendations from designers who had to follow the expert's guidelines, as well as experience in presenting safety cases to customers.

Love thy coworker; thy work, not necessarily

The whole "passionate about work" attitude irks me; save your passion for the bedroom. This is not to say that I'd rather be ridiculed for being a nerd who works too hard.

In fact, personally I'm in the strange position of a certified programming nerd with a blog and code on github, who nonetheless does it strictly for the money (I'd be a lawyer or an i-banker if I thought I could.) I'm thus on both sides of this, kinda.

So, in today's quest to change society through blogging, what am I asking society for, if neither passion nor scorn for work pleases me? Well, I'd rather society neither encourage nor discourage the love of work, and leave it to the individual's discretion.

From a moral angle, I base my belief on the Biblical commandment, "love thy neighbor", which I think does not dovetail into "love thy work" for a reason. From a practical angle, again I think that one's attitude to coworkers (also managers, customers and other people) is a better predictor of productivity than one's attitude to work.

People talk a lot about intrinsic vs extrinsic motivation – passion vs money – but I think they're actually very similar, and the more important distinction is personal vs social motivation.

Why? Because whether I work for fun or for the money, it's a means to my own personal end, which in itself precludes neither negligence nor fraud on my part. What makes you do the bits of work that are neither fun nor strictly necessary to get paid is that other people need it done, and you don't want to fail them.

Perhaps you disagree with my ideas on motivation. If so, here's an idea on boundaries that I hope is uncontroversial. Telling me how I should feel about my job infringes on my boundaries, which is to say that it's none of your business. If however I do a shoddy job and it becomes your problem, then I'm infringing on your boundaries, so you're absolutely entitled to complain about it. Here again requiring respect for coworkers is sensible, while requiring this or that attitude towards the work itself is not.


  • Someone's attitude towards work does not predict the quality of their work.
  • Asking reports and potential hires about their attitude towards work is a minor but unpleasant form of harassment.
  • A corporate culture of "we're doing this thing together" beats both "we're passionate to change the world by advancing the state of the art in blah blah" and "we're laser-focused on fulfilling customers' requirements on time and within budget".

P.S. Which kind of culture do managers typically want? Often they're schizophrenic on this. They want "passionate" workers, hoping that they'll accept less money. On the other hand, the same person often couldn't care less about the actual work (he sucks at it, and not having to do it anymore is management's biggest perk to him.) But what he does care about is deadlines, etc. – so he encourages a culture of shipping shit in the hope that it sorts itself out somehow. (These are the people the term "technical debt" was invented for; of course, nobody is convinced by this pseudo-businessy term if they weren't already convinced of the underlying idea that "shipping shit is bad.") Of course, a truly passionate worker is going to suffer mightily in the kind of culture created by the same manager who thinks he wanted this worker.

The overblown frequency vs cost efficiency trade-off

I've often read arguments that computing circuitry running at a high frequency is inefficient, power-wise or silicon area-wise or both. So roughly, the argument goes, 100 MHz is more efficient than 1 GHz, in that you get more work done per unit of energy or area spent. And CPUs go for 1 GHz or 3 GHz because serial performance sells regardless of efficiency. But accelerators like GPUs or embedded DSPs or ISPs or codecs implemented in hardware, etc. – these don't need to run at a high frequency.

And I think this argument is less common now that, say, GPUs have caught up, and an embedded GPU might run at the same frequency as an embedded CPU. But still, I've just seen someone peddling a "neuromorphic chip" or some such, and there it was – "you need to run conventional machines at 1 GHz and it's terribly inefficient."

AFAIK the real story here is pretty simple, namely:

  1. As you increase frequency, you GAIN efficiency up to a point;
  2. From that point on, you do start LOSING efficiency;
  3. That inflection point, for well-designed circuits, is much higher than people think (close to a CPU's frequency in the given manufacturing process, certainly not 10x less as people often claim);
  4. …and what fueled the myth is that accelerator makers used to be much worse at designing for high frequency than CPU makers. So marketeers, together with "underdog sympathizers", have blown the frequency vs efficiency trade-off completely out of proportion.

And below I'll detail these points; if you notice oversimplifications, please correct me (there are many conflicting goals in circuit implementation, and these goals are different across markets, so my experience might be too narrow.)

Frequency improves efficiency up to a point

What's the cost of a circuit, and how is it affected by frequency? (This section shows the happy part of the answer – the sad part is in the next section.)

  1. Silicon area. The higher the clock frequency, the more things the same circuit occupying this area does per unit of time – so you win!
  2. Leakage power – just powering up the circuit and doing nothing, not even toggling the clock signal, costs you a certain amount of energy per unit of time. Here again, the higher the frequency, the more work gets done in exchange for the same leakage power – again you win!
  3. Switching power – every time the clock signal changes its value from 0 to 1 and back, this triggers a bunch of changes to the values of other signals, as dictated by the interconnection of the logic gates, flip-flops – everything making up the circuit. All this switching from 0 to 1 and back costs energy (and NOT switching does not; measure the power dissipated by a loop multiplying zeros vs a loop multiplying random data, and you'll see what I mean – a sketch of this experiment follows below. This has implications for the role of software in conserving energy, but that's outside our scope here.) What's the impact of frequency on cost here? It turns out that frequency is neutral – the cost in energy is directly proportional to the clock frequency, but so is the amount of work done.

Overall, higher frequency means spending less area and power per unit of work – the opposite of the peanut gallery's conventional wisdom.
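Here's the zeros-vs-random experiment from point 3, as a minimal sketch (my own illustration, not from the original post – any compiled language will do, and the actual reading comes from whatever power meter or energy counter your platform has). The instruction stream and the clock frequency are identical in both phases; only the data being toggled differs, and so does the power draw:

    /* Minimal sketch: multiply zeros, then multiply random data, and watch
     * the power meter during each phase. Hypothetical sizes/repetition
     * counts – tune them so each phase runs long enough to measure. */
    #include <stdio.h>
    #include <stdlib.h>

    #define N    (1 << 20)
    #define REPS 2000

    static float a[N], b[N];
    static volatile float sink;   /* keeps the compiler from deleting the loops */

    static void run(const char *label)
    {
        float acc = 0.0f;
        for (int r = 0; r < REPS; r++)
            for (int i = 0; i < N; i++)
                acc += a[i] * b[i];
        sink = acc;
        printf("%s phase done\n", label);
    }

    int main(void)
    {
        /* Phase 1: multiply zeros – hardly any signal transitions in the datapath. */
        for (int i = 0; i < N; i++) a[i] = b[i] = 0.0f;
        run("zeros");

        /* Phase 2: multiply random data – lots of transitions, same instruction count. */
        for (int i = 0; i < N; i++) {
            a[i] = (float)rand() / RAND_MAX;
            b[i] = (float)rand() / RAND_MAX;
        }
        run("random");
        return 0;
    }

(Check the generated code if in doubt – the volatile sink is there so the compiler doesn't optimize the loops away.)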

Frequency degrades efficiency from some point

At some point, however, higher frequency does start to increase the cost of the circuit per unit of work. The reasons boil down to having to build your circuit out of physically larger elements that leak more power. Even further down the frequency-chasing path come other problems, such as having to break your work into many more pipeline stages, spending area and power on storage for the intermediate results of these stages, and needing expensive cooling solutions for heat dissipation. So actually there are several points along the road, with the cost of an extra MHz growing at each point – until you reach the physically impossible frequency for a given manufacturing process.

How do you find the point where an extra MHz isn't worth it? For a synthesizable design (one described in a high-level language like Verilog or VHDL), you can synthesize it for different target frequencies, measure the cost in area and power, and plot the results. My confidence about where the inflection point should be comes from looking at such plots. Of course the plot will depend on the design, bringing us to the next point.

Better-designed circuits' optimal frequency is higher

One hard part of circuit design is, you're basically making a hugely parallel system, where many parts do different things. Each part doing the same thing would be easy – they all take the same time, duh, so no bottleneck. Conversely, each part doing something else makes it really easy to create a bottleneck – and really hard to balance the parts (it's hard to tell exactly how much time a piece of work takes without trying, and there are a lot of options you could try, each breaking the work into different parts.)

You need to break the harder things into smaller pipeline stages (yes, a cost in itself as we've just said – but usually a small cost, unless you target really high frequencies and so have to break everything into umpteen stages.) Pipelining is hard to get right when the pipeline stages are not truly independent, and people often recoil from it (a hardware bug is, on average, more likely to be catastrophically costly than somewhat crummier performance is.) Simpler designs also shorten schedules, which may be better than reaching a higher frequency later.

So for CPUs – competing for a huge market on serial performance and (stupidly) on advertised frequency, while implementing a comparatively stable instruction set – the effort to overcome these obstacles was justified. (Sometimes to the detriment of consumers, arguably, as with, say, the Pentium 4 – namely, high frequency, low serial performance due to too much pipelining.)

Accelerators are different. You can to some extent compensate for poor serial performance by throwing money at the problem – add more cores. Sometimes you don't care about extra performance – if you can decode video at the peak required rate and resolution, extra performance might not win more business. Between frequency improvements and architecture improvements/implementing a huge new standard, the latter might be more worthwhile. And then the budgets are generally smaller, so you tend to design more conservatively.

So AFAIK this is why so many embedded accelerators had crummy frequencies when they started out (and they also had apologists explaining why it was a good thing). And that's why some of the accelerators caught up – basically it was never a technical limitation but an economic problem of where to spend effort, and changing circumstances caused effort to be invested into improving frequency. And that's why if you're making an accelerator core which is 3 times slower than the CPU in the same chip, my first guess is your design isn't stellar at this stage, though it might improve – if it ever has to.

P.S. I'll say it again – my perspective can be skewed; someone with different experience might point out some oversimplifications. Different process nodes and different implementation constraints mean that what's decisive in one's experience is of marginal importance in another's experience. So please do correct me if I'm wrong in your experience.

P.P.S. Theoretically, a design running at 1 GHz might be doing the exact same amount of work as a 2 GHz design – if the pipeline is 2x shorter and each stage in the 1 GHz design does the work of 2 stages in the 2 GHz design. In practice, the 1 GHz design will have stages doing less work than that, so they complete in less than the 1 nanosecond (1/1 GHz) cycle and are idle during much of it. And this is why you want to invest some effort to up the frequency in that design – to not have mostly-idle circuitry leaking power and using up area. But the theoretically possible, perfectly balanced 1 GHz design is a valid counter-argument to all of the above; I just don't think that's what hides behind most crummy frequencies.

Update: here's an interesting complication – Norman Yarvin's comment points to an article about near-threshold voltage research by Intel, from which it turns out that a Pentium implementation designed to operate at near-threshold voltage (at a near-2x cost in area) achieves its best energy efficiency at 100 MHz – 10x slower than its peak frequency but spending 47x less energy. The trouble is, if you want that 10x performance back, you'd need 10 such cores for an overall area increase of 20x, in return for overall energy savings of 4.7x. Other points on the graph will be less extreme (less area spent, less energy saved.)
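To spell out the arithmetic behind those numbers, here's a tiny check (my own back-of-the-envelope restatement of the update above; I'm reading the 47x figure as a power reduction, which is the only way the 4.7x result works out):

    /* Back-of-the-envelope check of the NTV trade-off quoted above. */
    #include <stdio.h>

    int main(void)
    {
        const double slowdown      = 10.0;  /* 100 MHz vs. the peak frequency            */
        const double power_saving  = 47.0;  /* claimed reduction when running at 100 MHz */
        const double area_per_core = 2.0;   /* NTV core area vs. a baseline core         */

        double cores        = slowdown;                /* replicate cores to regain throughput   */
        double total_area   = cores * area_per_core;   /* 10 * 2  = 20x area                     */
        double total_saving = power_saving / slowdown; /* 47 / 10 = 4.7x energy per unit of work */

        printf("cores: %.0f, area: %.0fx, energy savings at full throughput: %.1fx\n",
               cores, total_area, total_saving);
        return 0;
    }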

So this makes sense when silicon area is tremendously cheaper than energy, or when there's a hard limit on how much energy you can spend but a much laxer limit on area. This is not the case most of the time, AFAIK (silicon costs a lot, and then it simply takes physical space, which also costs), but it can be the case some of the time. NTV can also make sense if voltage is adjusted dynamically based on workload, you don't need high performance most of the time, and you don't mind that peak performance comes at a 2x area cost because you're happy to conserve energy tremendously when you don't need that performance.

Anyway, it goes to show that it's more complicated than I stated, even if I'm right for the average design made under today's typical constraints.

People can read their manager's mind

The fish rots from the head down.

– A beaten saying

People generally don't do what they're told, but what they expect to be rewarded for. Managers often say they'll reward something – perhaps they even believe it. But then they proceed to reward different things.

I think people are fairly good at predicting this discrepancy. The more productive they are, the better they tend to be at predicting it. Consequently, management's stated goals will tend to go unfulfilled whenever deep down, management doesn't value the sort of work that goes into achieving these goals.

So not only is paying lip service to these goals worthless, but so is lying to oneself and genuinely convincing oneself. When the time comes to reward people, it is the gut feeling of whose work is truly remarkable that matters. And what you usually convince yourself of is that the goal is important – but not that achieving it is remarkable. In fact, often someone pursuing what you think are unimportant goals in a way that you admire will impress you more than someone doing "important grunt work" (in your eyes.)

You then live happily with this compartmentalization – an important goal to be achieved by unremarkable people. However, nobody is fooled except you. The people whose compensation depends on your opinion have ample time to remember and analyze your past words and decisions – more time than you, in fact, and a stronger incentive. And so their mental model of you is often much better than your own. So they ignore your requests and become valued, instead of following them and sinking into obscurity.


  • A manager truly appreciates original mathematical ideas. The manager asks people to rid the code of crash-causing bugs, because customers resent crashes. The most confident people ignore him and spend time coming up with original math. The less confident people spend time chasing bugs, are upset by the lack of recognition, and eventually leave for greener pastures. At any given moment, the code base is riddled with crash-causing bugs.
  • A manager enjoys "software architecture", design patterns, and language-lawyer type knowledge. The manager asks people to cooperate better with neighboring teams, who are upset by missing functionality in the beautifully architected software. People will tend to keep designing more patterns into the program.
  • A highly influential figure enjoys hacking on his machine. The influential figure points out the importance of solid, highly-available infrastructure to support development. The department responsible for said infrastructure will guarantee that he gets as much bandwidth, RAM, screen pixels and other goodies as they can supply, knowing that the infrastructure he really cares about is whatever enables the happy hacking on his machine. The rest of the org might well remain stuck with a turd of an infrastructure.
  • A manager loathes spending money. The manager requires highly-available infrastructure to be built to support development. The people responsible for infrastructure will build a piece of shit out of yesteryear's scraps, purchased at nearby failing companies for peanuts, knowing that they'll be rewarded.
  • A manager is all about timely delivery, and he has done very little code maintenance in his life. The manager nominally realizes that a lot of code is used in multiple shipping products; that it takes some time to make a change compatible with all the client code; and that branching the entire code base is a quick way to do the work for this delivery, but you'll pay for the shortcut many times over in each of your future deliveries. People will fork the code base for every shipping product. (I've seen it and heard about it more times than the luckier readers would believe.)

And so it goes. If something is rotten in an org, the root cause is a manager who doesn't value the work needed to fix it. They might value it being fixed, but of course no sane employee gives a shit about that. A sane employee cares whether they are valued. Three corollaries follow:

Corollary 1. Who can, and sometimes does, un-rot the fish from the bottom? An insane employee. Someone who finds the forks, crashes, etc. a personal offense, and will repeatedly risk annoying management by fighting to stop these things. Especially someone who spends their own political capital, hard earned doing things management truly values, on doing work they don't truly value – such a person can keep fighting for a long time. Some people manage to make a career out of it by persisting until management truly changes their mind and rewards them. Whatever the odds of that, the average person cannot comprehend the motivation of someone attempting such a feat.

Corollary 2. When does the fish un-rot from the top? When a manager is taught by experience that (1) neglecting this thing is harmful and (2) it's actually hard to get it right (that is, the manager himself, or someone he considers smart, tried and failed.) But that takes managers admitting mistakes and learning from them. Such managers exist; to be called one of them would exceed my dreams.

Corollary 3. Managers who can't make themselves value all important work should at least realize this: their goals do not automatically become their employees' goals. On the contrary, much or most of a manager's job is to align these goals – and if it were that easy, perhaps they wouldn't pay managers that much, now would they? I find it a blessing to be able to tell a manager, "you don't really value this work so it won't get done." In fact, it's a blessing even if they ignore me. That they can hear this sort of thing without exploding means they can be reasoned with. To be considered such a manager is the apex of my ambitions.

Finally, don't expect people to enlighten you and tell you what your blind spots are. Becoming a manager means losing the privilege of being told what's what. It's a trap to think of oneself as just the same reasonable guy – why wouldn't they want to talk to me? The right question is, why would they? Is the risk worth it for them? Only if they take your org's problems very personally, which most people quite sensibly don't. Someone telling me what's what is a thing to be thankful for, but not to count on.

The safe assumption is, they read your mind like an open book, and perhaps they read it out loud to each other – but not to you. The only way to deal with the problems I cause is an honest journey into the depths of my own rotten mind.

P.S. As often happens, I wanted to write this for years (the working title was "people know their true KPIs"), but I didn't. I was prompted to finally write it by reading Dan Luu's excellent "How Completely Messed Up Practices Become Normal", where he says, among other things, "It’s sort of funny that this ends up being a problem about incentives. As an industry, we spend a lot of time thinking about how to incentivize consumers into doing what we want. But then we set up incentive systems that are generally agreed upon as incentivizing us to do the wrong things…" I guess this is my take on the incentives issue – real incentives vs stated incentives; I believe people often break rules put in place to achieve a stated goal in order to do the kind of work that is truly valued (even regardless of whether that work's goal is valued.) It's funny how I've effectively commented on Dan's blog two times in a row, his blog having easily become my favorite "tech blog", while my own is kinda fading away as I spend my free time learning to animate.