February 27th, 2017 — wetware
For better or worse, things want to work.
Consider driving at night on unlit, curvy mountain roads, at a speed about twice the limit, zigzagging between cars, including oncoming ones. Obviously dangerous, and yet many do this, and survive. How?
- Roads and cars are built with big safety margins
- Other drivers don't want to die and help you get through
- Practice makes perfect, so you get good at this bad thing
The road, the car, you, other drivers, and their cars all want this to work. So for a long while, it does, until it finally doesn't. I know 3-4 people who drive like this habitually. At least 2 of them totaled cars. All think they're excellent drivers. All have high IQs, making you wonder just what this renowned benchmark of human brains really tells us.
Now consider a terribly managed project with an insane deadline, and a team and budget too small. All too often, this too works out. How?
- Unless it physically cannot exist, a solution wants you to find it. You carve out a piece and the next piece suggests itself. Even if management fails to think how the pieces fit together, the pieces often come out such that they can be made to fit with modest extra effort.
- And then the people who make the pieces want them to fit. Even if the process is totally mismanaged, many people will talk to each other and find out what to do to make parts work together.
- The project was approved because a customer was persuaded. At this point, the customer wants the project to succeed. A little bit of schedule slippage will not make them change their minds, nor will a somewhat less impressive result. More slack for you.
- The vendor, too, wants the project to succeed, and will tolerate a little bit of budget overrun. More slack.
- Most often, when things fail, they fail visibly. It's as if things wanted you to see that they fail, so that you fix them.
The fact is that by cutting features, having a few non-terminal bugs, and being somewhat late and over budget, most projects can be salvaged. In fact, when they say that "most projects fail," the PMI (*) defines "failure" as being a bit late or over budget. If "failure" is defined as outright cancellation, I conjecture that most projects "succeed."
Which projects are least likely to be canceled? In other words, where is being late, over budget and off the original spec most tolerable? Obviously, when the overall delivered value is the highest, both in absolute terms and relative to the cost. In other words, reality punishes bad management the least in the most impactful cases.
What is the biggest problem with bad management? Same as crazy driving: risk. The problem in both cases is that you risk high-cost, low-probability events. It's terrible things that tend not to happen. And people are pretty bad at learning from mistakes they never had to pay for.
Wannabe racecar drivers fail to learn from driving into risky situations which their own eyes tell them are risky. For managers, learning is harder – the risks accumulated through bad management are abstract, instead of viscerally scary. In fact, a lot of the risks are never understood by management, or even fully reported. Too much of the risk gets swept under various rugs for it to ever become ingrained in institutional memory.
In fact, it's even worse, because risk-taking is actually rewarding as long as the downside doesn't materialize. The crazy driver gets there 10 minutes earlier. Similarly, non-obviously hazardous management often delivers at an obviously small cost. And while driving is not actually competitive, except in the inflamed minds of the zigzagging few, most projects are delivered in very competitive environments indeed. And competition can make even small rewards for risk decisive – as it can with any other smallish factor large enough to make a difference between victory and defeat.
Things want to work more than they want to punish us for our errors. The punishment may be very cruel and unusual alright, but it's rare. It seems that the universe, at least The Universe of Deliverables, is Beckerian. It delivers optimal punishment for rational agents correctly estimating probabilities. Sadly, humans are bad at probability.
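To see why the reward is steady while the punishment is rare, here's a back-of-the-envelope expected-value sketch. All the numbers are invented for illustration – the point is the shape of the bet, not the figures:

```python
# Expected value of a risky habit, with made-up numbers:
# each trip saves 10 minutes, but carries a 1-in-10,000 chance
# of a crash costing the equivalent of 5,000 hours (car, injury, etc.)
minutes_saved = 10
p_crash = 1e-4
crash_cost_minutes = 5000 * 60  # 300,000 minutes

# Per-trip expected gain: negative, so a rational agent declines the bet
ev = minutes_saved - p_crash * crash_cost_minutes
print(f"expected gain per trip: {ev:+.0f} minutes")  # -20 minutes

# Yet even after 500 trips, the odds of never having crashed are ~95%,
# so most risk-takers collect the reward every time and learn nothing
p_no_crash_500 = (1 - p_crash) ** 500
print(f"chance of zero crashes in 500 trips: {p_no_crash_500:.0%}")  # 95%
```

A losing bet on average, but one that most players win hundreds of times in a row – which is exactly the feedback a Beckerian universe gives, and exactly the feedback humans misread.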
And thus crazy drivers and bad managers alike (often the same people, BTW) march from one insane adventure to the next, gaining more and more confidence in their brilliance.
(*) PMI (The Project Management Institute) is a con, where they sell you "PMBOK" (Project Management Body of Knowledge, a thick book you can use as a monitor stand) and "PMP" (Project Management Professional, a certification required by PMI's conscious or unwitting accomplices in dark corners of the industry.) A variety of more elaborate cons targeted at narrower audiences incorporate PMI's core body of cargo cult practices.
September 11th, 2016 — OT
OK, so 2 things:
1. If you send me someone's CV and they're hired to work on self-driving algos – machine vision/learning/mapping/navigation – I'll pay you a shitton of money. (Details over email.) These teams want a CS/math/physics/similar degree with great grades, and they want programming ability. They'll hire quite a lot of people.
2. The position below is for my team and if you refer a CV, I cannot pay you a shitton of money. But:
We're developing an array language that we want to efficiently compile to our in-house accelerators (multiple target architectures, you can think of it as "compiling to a DSP/GPU/FPGA.")
Of recent public efforts, perhaps Halide is the closest relative (we're compiling AOT instead of processing a graph of C++ objects constructed at run time, but I'm guessing the work done at the back-end is somewhat similar.) What we have now is already beating hand-optimized code in our C dialects on some programs, but it's still a "blue sky" effort in that we're not sure exactly how far it will go (in terms of the share of production programs where it can replace our C dialects.)
As usual, we aren't looking for someone with experience in exactly this sort of thing (here especially it'd be hopeless since there are few compiler writers and most of them work on lower-level languages.) Historically, the people who enjoy this kind of work have a background in what I broadly call (mislabel?) "discrete math" - formal methods, theory of computation, board game AI, even cryptography, basically anywhere where you have clever algorithms in a discrete space that can be shown to work every time. (Heavyweight counter-examples missing one of "clever", "discrete" or "every time" – OSes, rendering, or NNs. This of course is not to say that experience in any of these is disqualifying, just that they're different.)
I think of it as a gig combining depth that people expect from academic work with compensation that people expect from industry work. If you're interested, email me (Yossi.Kreinin@gmail.com).
All positions are in Jerusalem.
August 1st, 2016 — wetware
OK, published at 3:30 AM. That's a first!
So. Got something you want to do over the course of a year? Here's a motivation woefully insufficient to pull it off:
What could give you enough drive to finish the job? Anything with a reward in the future, once you're done:
- Millions of fans will adore me.
- It will be the ugliest thing on the planet.
- I will finally understand quantum neural rockets.
- We will see who the loser is, Todd!
- I will help humanity.
- I will destroy humanity.
It doesn't matter how noble or ignoble your goal is. What matters is delaying gratification. Because even your favorite thing in the world will have shitty bits if you chew on a big enough chunk of it. A few months or years worth of work are always a big enough chunk, so there will be shitty bits. Unfortunately, it's also the minimum-sized chunk to do anything of significance.
This is where many brilliant talents drown. Having known the joy of true inspiration, it's hard to settle for less, which you must to have any impact. Meanwhile, their thicker peers happily butcher task after task. Before you know it, these tasks add up to an impactful result.
In hindsight, I was really lucky in that I chose a profession for money instead of love. Why? Stamina. Money is a reward in the future that lets you ignore the shittier bits of the present.
Loving every moment of it, on the other hand, carries you until that moment which you hate, and then you need a new sort of fuel. Believe me, I know. I love drawing and animation, and you won't believe how many times I started and stopped doing it.
But the animation teacher who taught me 3D said he was happy to put textures on toilet seat models when he started out. That's the kind of appetite you need – and very few people naturally feel that sort of attraction to toilet seats. You need a big reward in the future, like "I'm going to become a pro," to pull it off.
But I don't want to become a pro. I don't want to work in the Israeli animation market where there's scarcely a feature film made. I don't even want to work for a big overseas animation studio. I want to make something, erm, something beautiful that I love, which is a piece of shit of a goal.
Because you know where I made most progress picking up actual skills? In an evening animation school, where I had a perfectly good goal: survive. It's good because it's a simple, binary thing which doesn't give a rat's ass about your mood. You either drop out or you don't. But "something I love" is fluid, and depends a lot on the mood. And when you hate this thing you're making, as you sometimes will, it's hard to imagine loving it later.
Conversely, imagining how I don't drop out is easy. This is what I was imagining when sculpting this bust, which 90% of the time I hated with a passion because it looked like crap. But I thought, "I'm not quitting, I'm not quitting, I'm not quitting, hey, I get the point of re-topology in Mudbox, I'm not quitting, I'm not quitting, hey, I guess I see what the specular map does, I'm not quitting… Guess I'm done!"
And now let's talk about beauty for a moment.
I'm a programmer. I like to think that I'm not the thickest, butcherest programmer, in that I understand the role of beauty in it. For the trained eye, programs can be beautiful as much as math, physics or chess, and a beautiful program is better for business than a needlessly uglier one. (Ever tried pitching the value of beauty to someone businessy? Loads of fun.)
But you know why beauty is your enemy? Because it sucks the fun out of things. How? Because you're making this thing and chances are, it's not beautiful according to your own standard. The trap is, your taste for beauty is usually ahead of your creative ability. In any area, and then in any sub-area of that area, ad infinitum, you can tell ugly from beautiful long before you can make something beautiful yourself. And even if you can satisfy your own taste, often the final thing is beautiful, but not the states it goes through.
So the passionate, sensitive soul is hit twice:
- You're driven by fun and inspiration because you once experienced it and now you covet it.
- Your sense of beauty, frustrated by the state of your creation, kills all the fun – that very fun which you insist must be your only fuel.
Life is easier if you want a yacht. I think you can buy a decent one for $300K, and certainly for $1M. Now all you need to do is make that money – it doesn't matter doing what, because imagining that yacht will help you do anything well! If you want beauty, however, I do not envy you.
How do I cope with my desire for beauty? The first step is acknowledging the problem, which I do. The fact is that my worst failures in programming came when I insisted on beauty the most. The second step is shunning beauty as a goal, and making it into a means and a side-effect.
I need a program doing at least X, taking at most Y seconds, at a date not later than Z. I'll keep ugliness to a minimum because ugly programs work badly. And if it comes out particularly nicely, that's great. But beauty is not a goal, and enjoying the beauty of this program as I write it is not why I write it.
And if you think it's true for commercial work but not open source software, look at, I dunno, Linux. Read some Torvalds:
Realistically, every single release, most of it is just driver work. Which is kind of boring in the sense there is nothing fundamentally interesting in a driver, it's just support for yet another chipset or something, and at the same time that's kind of the bread and butter of the kernel. More than half of the kernel is just drivers, and so all the big exciting smart things we do, in the end it pales when compared to all the work we just do to support new hardware.
Boring bits. Boring bits that must be done to make something of value.
Does this transfer to art or poetry or any of those things whose whole point is beauty? Well, yeah, I think it does, because no, beauty is not the whole point:
- The most important thing about a drawing is that it's done. Now it exists, and people can see it, and you can make another one. Practice. They will not come out very well if they don't come out.
- Often people like your subject. There's a continuum between "it's beautiful in a way that words cannot convey" and "I love how this song expresses my favorite political philosophy." To the extent that a work of art tells a story, or even sets up a mood, its beauty does become a means to an end.
- Just because the end result is beautiful to the observer, and even if that's the only point, doesn't mean every step making it was an orgy of beauty for whoever made it. Part of what goes into it is boring, technical work.
So here, too, I'm trying to make beauty a non-goal. Instead my goals are "make a point" and "keep going," and I try to add beauty, or remove ugliness, as I go.
For example, I didn't do a graduation project in the evening school, but I animated a short on my own in the same timeframe, and I published it, even though it's not the beautiful thing I always dreamed about making. And I'm not sure anyone gets the joke except me. (I'm not sure I get it anymore, either.)
Now my goal is "make another one." It's a good goal, because it's easy to imagine making another one. It's proper delayed gratification.
And if you've enjoyed programming 20 years ago and are trying to reignite the passion, I suggest that you find a goal as worthy for you as "fun" or "beauty", but as clear and binary as a yacht. And you can settle for less worthy, but not for less clear and binary. Because everything they told you about "extrinsic motivation" being inferior to "intrinsic motivation" is one big lie. And this lie will fall apart the moment you sink your teeth into a bunch of shit, as will always happen if you're trying to accomplish anything.
Follow me on Twitter to receive pearls of wisdom.
July 13th, 2016 — hardware
I wrote a post on embeddedrelated.com about hardware bugs - places where they're rarely to be found, places which they inhabit in large quantities, and why these insects flourish in some places more than others.
It's one of these things that I wish I was told when I started to fiddle with this shit – that while a chip is monolithic from a manufacturing point of view, from the logical spec angle it's a hodgepodge of components made and sold by different parties with very different goals and habits, tied together well enough to be marketable, but not enough to make it coherent from a low-level programmer's point of view. In fact, it's the job of the few low-level programmers to hide the idiosyncratic and buggy parts so as to present a coherent picture to the many higher-level programmers - the ones whose mental well-being is an economically significant goal.
June 30th, 2016 — OT
I wouldn't spam you with these job offers if it didn't work :-) So, we're looking for senior IT people to work at our Jerusalem offices – managers and hands-on people alike. We have rapid growth, "Big Data" (it definitely can crash Excel – in fact, at one point it came close to physically crashing through the floor due to the storage servers' weight, but luckily that's been handled), "HPC" (biggish server farms, distributed build & tests, etc.), and many other buzzwords (*). I don't know where IT ends and DevOps starts, but I guess a good candidate could have either in their CV, so there.
If you have qualified friends looking for a challenging, well-paying job at a fun place, send their CVs, the sooner the better – we're in a hurry (rapid growth!), so early birds are more likely to get the can of worms. As always, "challenging" is a downside as much as an upside (a place where IT means Exchange, SAP and little else might pay very well for a more predictable and less demanding job.)
We value experience in building and maintaining non-trivial systems, and technical reasoning (X happens because of Y, Z is most efficient if you use it to do W, etc.) We also value experience in higher-level areas such as management and purchasing, and business reasoning (don't hook X and Y together since their vendors compete and will sabotage the project, Z beats W in terms of total cost of ownership, etc.) We do kinda lean towards thinking of technical aptitude as a cornerstone on top of which solid higher-level expertise is built. (We've seen managers snowed by vendors, reports, etc., which is a perennial problem in tech at large and isn't restricted to IT.)
If you'd like to hear more details, please email Yossi.Kreinin@gmail.com
(*) What we don't have is a heavy-duty web site/application, which might make the position less relevant for some.
June 24th, 2016 — animation
First of all, I proudly present a 2-minute short that I animated!
One thing I learned making the film is that my Russian accent colors not only my words, but any noise coming out of my mouth. So I'm not the most versatile voice actor.
Anyway, we certainly have a debt crisis, and easy credit policies keep producing still more debt. I don't think interest rates have ever stayed so low for so long, everywhere.
Economists argue both for and against debt expansion (1), as they argue about everything.
My own take is as simple as my sparse knowledge ought to make it:
- Unprecedented conditions produce unprecedented outcomes.
- Booms are usually gradual, and busts are sudden.
No unusual boom has gradually arisen from unusual monetary policy, and it's been a while. But something unusual ought to happen in unusual conditions! Thus one expects a sudden, unusual bust down the road.
That's it. It's like a physicist's proof (2) that one's attractiveness peaks at some distance from the observer. At the distances of zero and infinity, visual attractiveness is zero (you can't see anything.) Therefore, attractiveness as a function of distance has a maximum somewhere in between. True, kinda, and it didn't take a lot of insight into the nature of attractiveness – much like my peak debt proof doesn't require an understanding of the economy (3).
Will today's "Brexit" trigger the global downturn predicted by Yossi Kreinin's Rule of Unprecedented Conditions? Probably not by itself. I think it's a symptom more than a cause (4), and the big bad thing comes later.
In the meanwhile, here's to hoping that my little film (started when "Grexit" was a thing, completed just in time for Brexit) was funnier than the average forward from grandma (5).
Happy Brexit! And if you follow people on Twitter, there's a strong case for following me as well.
(1) Bibliography: Nobody Understands Debt except Krugman; Does Krugman Understand Debt?
(2) I think a particular famous physicist said it, but I forget who.
(3) …and I can't say I have any understanding of the economy. That said, I've owed and paid off a lot of debt, and got to negotiate with many bankers. And I can tell you that "debt is money we owe to ourselves", Krugman's catchphrase, feels unconvincing to creditors – as many people and whole nations have found out.
(4) In fact, I just got an email from an asset manager saying that it's good for the UK in the longer run, elevating Brexit from a symptom to a cure. But he didn't say "good for everyone", and then I'm not sure his crystal ball is better than yours or mine.
(5) I linked to /r/forwardsfromgrandma since, regardless of the politics of either its members or their grandmas, I ought to give credit for the brilliant term – it's definitely funny because it's true. I've watched many relatives acquire the habit of forwarding various wingnut stuff as they age. Most frighteningly, my own urge to email such things gets harder to resist every year. I can sense my own ongoing grandmafication; between you and me, an animated short about debt might be a part of "it." Scary, scary stuff.
June 1st, 2016 — wetware
Now you see that evil will always triumph, because good is dumb.
– Dark Helmet
Evildoers live longer and feel better.
My writing has recently prompted an anonymous commenter to declare that people like me are what's wrong with the world. Oh joy! – finally, after all these years of doing evil, some recognition! Excited, I decided to share one of my battle-tested evil tips, which never ever failed evil me.
Don't work on "easy" things
An easy thing is a heads they win, tails you lose situation. Failing at easy things is shameful; succeeding is unremarkable. Much better to work on hard things – heads you win, tails they lose. Failure is unfortunate but expected; success makes you a hero.
Treat this seriously, because it snowballs. The guy working on the "hard" thing gets a lot of help and resources, making it easier to succeed – and easier to move on to the next "hard" thing. The guy doing the "easy" tasks gets no help, so it's harder to succeed.
Quotation marks all over the place, because of course what counts is perception, not how hard or easy it really is. The worst thing to work on is the hard one that's perceived as easy – the hard "easy" thing. The best thing is the easy "hard" one. My years-long preference for low-level programming results, in part, from its reputation as a very hard field, when in practice it takes a little knowledge and a lot of discipline – but no outstanding skills.
(Why then do many people fear low-level programming? Only because of how hard it bites you the first few times. People who felt that pain and recoiled respect those who've moved past it and reached productivity. Now you know why people take shit from the likes of Ken Thompson and Linus Torvalds, and then beg for more.)
The point where this gets really evil is not when a heroic doer of hard things decides to behave like a Torvalds. That's more stupid than evil. You'll get away with being a Torvalds, but it always costs some goodwill and hence, ultimately, money. So the goal-oriented evildoer always tries to be on their best behavior.
No, the point where this gets really evil is when you let them fail. When they come to you thinking that it's easy, and you know it's actually very hard, and you turn them down, and you let them fail a few times, and you wait until they come back with a readjusted attitude - that's evil.
Here, the evildoer needs to strike a delicate balance, keeping in mind The Evildoer's Golden Rule:
- You can only sustain that much do-gooding; however,
- Your environment can only take that much evildoing, and you need your environment.
Here's the rule applied to our situation:
- Working on the hard "easy" thing – all trouble, no credit – is going to be terrible for you. You'll get a taste of a do-gooder's short, miserable life.
- However, if this thing is so important that a failure would endanger the org, maybe you should be the do-gooder and save them from their misconceptions at your own expense. Maybe. And maybe not. Be sure to think about it.
The upshot is, sometimes the evildoer gets to be the do-gooder, but you should know that it's hazardous to your health.
Making easy things into hard ones: the postponing gambit
Sometimes you can't weasel out of doing something "easy." An interesting gambit for these cases is to postpone the easy thing until it becomes urgent. This accomplishes two things at a time:
- Urgent things automatically become harder, and the person in a hurry more important. The later it's done, the easier it is to get help (while retaining the status of "the" hero in the center of it all who made it happen.)
- Under time pressure, the scope shrinks, making the formerly "easy" and now officially "hard" thing genuinely easier. This is particularly useful for the really disgusting, but unavoidable work.
But it is a gambit, because postponing things until they become urgent is openly evil. (Avoiding easy things is not – why, it's patriotic and heroic to look for the harder work!) To get away with postponing, you need an excuse:
- other supposedly urgent work;
- whoever needing this thing not having reminded you;
- or even you having sincerely underestimated the difficulty and hence, regrettably, having postponed it too much – you're so sorry. (This last excuse has the drawback of you having to admit an error. But to the extent that urgency will make the scope smaller, the error will become smaller, too.)
One thing you want to prevent is people learning to remind you earlier. The way to accomplish it is being very nice when they come late. If people feel punished for reminding too late, they'll come earlier next time, and in a vengeful mood, so with more needless tasks. But if they're late and you eagerly "try to do the best under the circumstances", not only do you put yourself under the spotlight as a patriotic hero, you move the forgetful culprit out of the spotlight. So they'll form a rosy memory of the incident, and not learn the value of coming earlier – precisely what we want.
One thing making the postponing gambit relatively safe is that management is shocked by the very thought of people playing it, as can be seen in the following real-life conversation:
Babbling management consultant: A lot of organizations have a problem where they only work on urgent things at the expense of important, but less urgent ones.
Low-ranking evildoer manager (in a momentary lapse of reason): Why, of course! I actually postpone things to get priority around here.
Higher-ranking manager (in disbelief): You aren't serious, of course.
Low-ranking evildoer (apparently still out to lunch): I am.
Higher-ranking manager (firmly): I know you aren't.
Low-ranking evildoer finally shuts his mouth.
See? Sometimes they won't believe it even if you say it to their face. So they're unlikely to suspect you. (Do people reporting to me play the postponing gambit? Sometimes they do, and I don't resent them for it; their priorities aren't mine. But in the worst case, you should expect a lot of resentment – it's practically high treason – so you should have plausible deniability.)
To a very large extent, your productivity is a result of what you choose to work on. Keep things perceived as easy out of that list. When you can't, postponing an "easy" thing can make it both "harder" and smaller.
Happy evildoing, and follow me on Twitter!
May 19th, 2016 — OT
Unlike most positions mentioned here, this one includes the possibility of working remotely (certainly from Europe and I think from elsewhere, too), with occasional visits to Jerusalem.
Functional safety experts with automotive experience are generally rare and in demand, meaning that
- they're probably gainfully employed, and
- I'm not counting on one of them reading this blog.
However, I imagine that a friend of a safety expert might be among my readers. If you're that reader, you can tell your friend the safety expert that we're very eager to hire them, and will go a long way to make an attractive proposition.
We particularly value experience at the chip/ASIC side of things (translating ISO 26262 requirements to actionable guidelines on hardening the design in question, together with a verification methodology and the required safety documentation.) We also value recommendations from designers who had to follow the expert's guidelines, as well as experience in presenting safety cases to customers.
May 13th, 2016 — wetware
The whole "passionate about work" attitude irks me; save your passion for the bedroom. This is not to say that I'd rather be ridiculed for being a nerd who works too hard.
In fact, personally I'm in a strange position of a certified programming nerd with a blog and code on github, who nonetheless does it strictly for the money (I'd be a lawyer or an i-banker if I thought I could.) I'm thus on both sides of this, kinda.
So, in today's quest to change society through blogging, what am I asking society for, if neither passion nor scorn for work please me? Well, I'd rather society neither encourage nor discourage the love of work, and leave it to the individual's discretion.
From a moral angle, I base my belief on the Biblical commandment, "love thy neighbor", which I think does not dovetail into "love thy work" for a reason. From a practical angle, again I think that one's attitude to coworkers (also managers, customers and other people) is a better predictor of productivity than one's attitude to work.
People talk a lot about intrinsic vs extrinsic motivation – passion vs money - but I think they're actually very similar, and the more important distinction is personal vs social motivation.
Why? Because whether I work for fun or for the money, it's a means to my own personal end, which in itself precludes neither negligence nor fraud on my behalf. What makes you do the bits of work that are neither fun nor strictly necessary to get paid is that other people need it done, and you don't want to fail them.
Perhaps you disagree with my ideas on motivation. If so, here's an idea on boundaries that I hope is uncontroversial. Telling me how I should feel about my job infringes on my boundaries, which is to say that it's none of your business. If however I do a shoddy job and it becomes your problem, then I'm infringing on your boundaries, so you're absolutely entitled to complain about it. Here again requiring respect for coworkers is sensible, while requiring this or that attitude towards the work itself is not.
To sum up:
- Someone's attitude towards work does not predict the quality of their work
- Inquiring reports and potential hires about their attitude towards work is a minor but unpleasant harassment
- A corporate culture of "we're doing this thing together" beats both "we're passionate to change the world by advancing the state of the art in blah blah" and "we're laser-focused on fulfilling customers' requirements on time and within budget"
P.S. Which kind of culture do managers typically want? Often they're schizophrenic on this. They want "passionate" workers, hoping that they'll accept less money. On the other hand, the same person often doesn't care about the actual work in the worst way (he sucks at it, and not having to do it anymore is management's biggest perk to him.) But what he does care about is deadlines, etc. – so he encourages a culture of shipping shit in the hope that it sorts itself out somehow. (These are the people the term "technical debt" was invented for; of course, nobody is convinced by this pseudo-businessy term unless they were already convinced of the underlying idea that shipping shit is bad.) Of course, a truly passionate worker is going to suffer mightily in the kind of culture created by the same manager who thinks he wants this worker.
January 31st, 2016 — hardware
I've often read arguments that computing circuitry running at a high frequency is inefficient, power-wise or silicon area-wise or both. So roughly, 100 MHz is more efficient, in that you get more work done per unit of energy or area spent. And CPUs go for 1 GHz or 3 GHz because serial performance sells regardless of efficiency. But accelerators like GPUs or embedded DSPs or ISPs or codecs implemented in hardware etc. etc. – these don't need to run at a high frequency.
And I think this argument is less common now that, say, GPUs have caught up, and an embedded GPU might run at the same frequency as an embedded CPU. But still, I've just seen someone peddling a "neuromorphic chip" or some such, and there it was – "you need to run conventional machines at 1 GHz and it's terribly inefficient."
AFAIK the real story here is pretty simple, namely:
- As you increase frequency, you GAIN efficiency up to a point;
- From that point on, you do start LOSING efficiency;
- That optimal point, for well-designed circuits, is much higher than people think (close to a CPU's frequency in the given manufacturing process, certainly not 10x less, as people often claim);
- …and what fueled the myth is that accelerator makers used to be much worse at designing for high frequency than CPU makers. So marketeers, together with "underdog sympathizers", have blown the frequency vs. efficiency trade-off completely out of proportion.
And below I'll detail these points; if you notice oversimplifications, please correct me (there are many conflicting goals in circuit implementation, and these goals are different across markets, so my experience might be too narrow.)
Frequency improves efficiency up to a point
What's the cost of a circuit, and how is it affected by frequency? (This section shows the happy part of the answer – the sad part is in the next section.)
- Silicon area. The higher the clock frequency, the more work the circuit occupying this area does per unit of time – so you win!
- Leakage power – just powering up the circuit and doing nothing, not even toggling the clock signal, costs you a certain amount of energy per unit of time. Here again, the higher the frequency, the more work gets done in exchange for the same leakage power – again you win!
- Switching power – every time the clock signal changes its value from 0 to 1 and back, it triggers a bunch of changes to the values of other signals, as dictated by the interconnection of the logic gates, flip-flops – everything making up the circuit. All this switching from 0 to 1 and back costs energy (and NOT switching does not; measure the power dissipated by a loop multiplying zeros vs. a loop multiplying random data, and you'll see what I mean. This has implications for the role of software in conserving energy, but that's outside our scope here.) What's the impact of frequency on cost here? It turns out that frequency is neutral – the cost in energy is directly proportional to the clock frequency, but so is the amount of work done.
Overall, higher frequency means spending less area and power per unit of work – the opposite of the peanut gallery's conventional wisdom.
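The leakage amortization can be put in toy numbers. A minimal sketch, where the leakage power and switching energy are made-up illustrative values, not measurements from any real process:

```python
# Toy model: energy per operation vs. clock frequency, ignoring (for now)
# the penalties of chasing very high frequencies.
# All coefficients are illustrative assumptions, not measured values.

P_LEAK = 0.10        # leakage power, watts (burned whether or not we work)
E_SWITCH = 1e-10     # switching energy per operation, joules (~C*V^2)

def energy_per_op(freq_hz):
    # Leakage is paid per unit of time, so it amortizes over more
    # operations as frequency rises; switching energy is per-operation
    # and thus frequency-neutral.
    return P_LEAK / freq_hz + E_SWITCH

for f_mhz in (100, 500, 1000, 2000):
    e = energy_per_op(f_mhz * 1e6)
    print(f"{f_mhz:5d} MHz: {e * 1e9:.3f} nJ/op")
```

Energy per operation falls monotonically with frequency in this model, because the fixed leakage is split over more operations; the terms that eventually reverse the trend are the subject of the next section.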
Frequency degrades efficiency from some point
At some point, however, higher frequency does start to increase the cost of the circuit per unit of work. The reasons boil down to having to build your circuit out of physically larger elements that leak more power. Even further down the frequency-chasing path come other problems, such as having to break down your work to many more pipeline stages, spending area and power on storage for the intermediate results of these stages; and needing expensive cooling solutions for heat dissipation. So actually there are several points along the road, with the cost of extra MHz growing at each point – until you reach the physically impossible frequency for a given manufacturing process.
How do you find the point where an extra MHz isn't worth it? For a synthesizable design (one described in a hardware description language like Verilog or VHDL), you can synthesize it for different frequencies, measure the cost in area and power, and plot the results. My confidence about where the optimal point should be comes from looking at such plots. Of course the plot depends on the design, which brings us to the next point.
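The shape of those plots can be mimicked with a toy sweep: past some point, timing closure costs you larger, leakier cells and higher voltage, so energy per operation stops falling and turns back up. All coefficients below are illustrative assumptions, not synthesis results:

```python
# Toy frequency sweep showing the U-shaped energy-per-operation curve.
# Coefficients are made up; only the structure of the trade-off matters.

def energy_per_op(f_ghz):
    # leakage grows steeply once we push past a "comfortable" ~1 GHz
    p_leak = 0.1 * (1 + 3 * max(0.0, f_ghz - 1.0) ** 2)  # watts
    v = 1.0 + 0.3 * max(0.0, f_ghz - 1.0)                # supply voltage rises too
    e_switch = 1e-10 * v * v                             # ~C*V^2 per operation
    return p_leak / (f_ghz * 1e9) + e_switch             # joules per operation

# sweep 0.1 GHz .. 3.0 GHz and find the most efficient point
sweep = [(f / 10, energy_per_op(f / 10)) for f in range(1, 31)]
best_f, best_e = min(sweep, key=lambda p: p[1])
print(f"optimum near {best_f:.1f} GHz, {best_e * 1e9:.3f} nJ/op")
```

With these made-up numbers the optimum sits close to the 1 GHz "comfort limit", not far below it – which is the shape of the real plots as I remember them.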
Better-designed circuits' optimal frequency is higher
One hard part of circuit design is, you're basically making a hugely parallel system, where many parts do different things. Each part doing the same thing would be easy – they all take the same time, duh, so no bottleneck. Conversely, each part doing something else makes it really easy to create a bottleneck – and really hard to balance the parts (it's hard to tell exactly how much time a piece of work takes without trying, and there are a lot of options you could try, each breaking the work into different parts.)
You need to break the harder things into smaller pipeline stages (yes, a cost in itself as we've just said – but usually a small cost unless you target really high frequencies and so have to break everything into umpteen stages.) Pipelining is hard to get right when the pipeline stages are not truly independent, and people often recoil from it (a hardware bug is on average more likely to be catastrophically costly than somewhat crummier performance.) Simpler designs also shorten schedules, which may be better than reaching a higher frequency later.
So CPUs, competing for a huge market on serial performance and (stupidly) on advertised frequency, while implementing a comparatively stable instruction set, justified the effort to overcome these obstacles. (Sometimes to the detriment of consumers, arguably – as with the Pentium 4: high frequency, low serial performance due to too much pipelining.)
Accelerators are different. You can to some extent compensate for poor serial performance by throwing money at the problem - add more cores. Sometimes you don't care about extra performance – if you can decode video at the peak required rate and resolution, extra performance might not win more business. Between frequency improvements and architecture improvements/implementing a huge new standard, the latter might be more worthwhile. And then the budgets are generally smaller, so you tend to design more conservatively.
So AFAIK this is why so many embedded accelerators had crummy frequencies when they started out (and they also had apologists explaining why it was a good thing). And that's why some of the accelerators caught up – basically it was never a technical limitation but an economic problem of where to spend effort, and changing circumstances caused effort to be invested into improving frequency. And that's why if you're making an accelerator core which is 3 times slower than the CPU in the same chip, my first guess is your design isn't stellar at this stage, though it might improve – if it ever has to.
P.S. I'll say it again – my perspective can be skewed; someone with different experience might point out some oversimplifications. Different process nodes and different implementation constraints mean that what's decisive in one's experience is of marginal importance in another's experience. So please do correct me if I'm wrong in your experience.
P.P.S. Theoretically, a design running at 1 GHz might be doing the exact same amount of work as a 2 GHz design – if the pipeline is 2x shorter and each stage in the 1 GHz thing does the work of 2 stages in the 2 GHz thing. In practice, the 1 GHz design will have stages doing less work, so they complete in less than 1 nanosecond (1/1GHz) and are idle during much of the cycle. And this is why you want to invest some effort to up the frequency in that design – to not have mostly-idle circuitry leaking power and using up area. But the theoretically possible perfectly balanced 1 GHz design is a valid counter-argument to all of the above, I just don't think that's what most crummy frequencies hide behind them.
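The cost of that mostly-idle circuitry can be put in toy numbers. A sketch with illustrative, made-up figures – the point is only the structure of the trade-off:

```python
# Toy version of the P.P.S. arithmetic: leakage is paid for the whole
# clock cycle, so a stage that finishes its work early still leaks while
# sitting idle for the rest of the cycle. Numbers are illustrative.

P_LEAK = 0.05     # leakage power of one pipeline stage, watts
E_SWITCH = 5e-11  # switching energy per operation in that stage, joules

def energy_per_op(cycle_ns):
    # each operation occupies the stage for one full cycle
    return P_LEAK * cycle_ns * 1e-9 + E_SWITCH

# A 1 GHz design whose stage logic only needs 0.5 ns: the cycle is still
# a full 1 ns, so half the leakage is spent while the logic sits idle.
idle_at_1ghz = energy_per_op(1.0)

# The same stage clocked at 2 GHz: the 0.5 ns cycle is fully used.
busy_at_2ghz = energy_per_op(0.5)

print(f"1 GHz, half-idle:  {idle_at_1ghz * 1e9:.3f} nJ/op")
print(f"2 GHz, fully busy: {busy_at_2ghz * 1e9:.3f} nJ/op")
```

The fully-busy 2 GHz variant spends less energy per operation, and gets twice the work out of the same area – which is why it pays to invest some effort in upping the frequency of the under-utilized design.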
Update: here's an interesting complication – Norman Yarvin's comment points to an article about near-threshold voltage research by Intel, from which it turns out that a Pentium implementation designed to operate at near-threshold voltage (at a near-2x cost in area) achieves its best energy efficiency at 100 MHz – 10x slower than its peak frequency but spending 47x less energy. The trouble is, if you want that 10x performance back, you'd need 10 such cores for an overall area increase of 20x, in return for overall energy savings of 4.7x. Other points on the graph will be less extreme (less area spent, less energy saved.)
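For the quoted 4.7x overall figure to follow from the 47x and 10x figures, the 47x must be a power (energy per unit of time) ratio; under that reading, which is an assumption on my part, the arithmetic works out like this:

```python
# Working through the NTV Pentium numbers cited above, reading the 47x
# figure as a power reduction at 10x lower frequency (an assumption; it's
# the reading under which the quoted 4.7x overall figure follows).

slowdown = 10        # 100 MHz vs ~1 GHz peak frequency
power_saving = 47    # power drawn at near-threshold vs at peak
area_per_core = 2    # the NTV design costs ~2x area

# Energy per unit of work: 47x less power, but 10x more time per task.
energy_saving_per_core = power_saving / slowdown   # 4.7x

# Win the 10x performance back by replicating the core 10x:
cores = slowdown
total_area = cores * area_per_core                 # 20x area
total_power = cores / power_saving                 # fraction of the fast core's power
overall_energy_saving = 1 / total_power            # still 4.7x at equal throughput

print(f"area: {total_area}x, overall energy saving: {overall_energy_saving:.1f}x")
```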
So this makes sense when silicon area is tremendously cheaper than energy, or when there's a hard limit on how much energy you can spend but a much laxer limit on area. This is not the case most of the time, AFAIK (silicon costs a lot, and then it simply takes physical space, which also costs), but it can be the case some of the time. NTV can also make sense if voltage is adjusted dynamically based on workload, you don't need high performance most of the time, and you don't mind that your peak performance comes at a 2x area cost, because you're happy to conserve energy tremendously whenever you don't need the performance.
Anyway, it goes to show that it's more complicated than I stated, even if I'm right for the average design made under today's typical constraints.