A Sokoban levels design programming contest

I hate puzzles with a passion; I think of them as Gordian knots best untied with a sword, a machine gun or whatever else you can bring to bear on the problem.

The world of computer programmers, however – the world which I entered with the sole purpose of working for the highest bidder – is a world full of people who sincerely love puzzles. And if you visit this blog, perhaps you're one of these people.

If so, you might be pleased to learn about the recently launched Sokoban levels design contest, operated by gild – a great hacker, my long-time co-worker, an IOCCC winner, and a participant in Al Zimmerman's programming contests which he cites as inspiration for his own new contest.

The rules are precisely defined here; the general idea is to design Sokoban levels falling into different problem classes. Submitted levels are scored based on the length of the shortest solution (longer is better), and normalized s.t. the level taking the most steps to solve right now gets the score of 1. With 50 problem classes, the maximal overall score is 50. But with all the other cunning contestants submitting their own levels, your levels' score might be dropping every day or hour!

And I really mean every day or hour – even now at the very beginning there are several submissions per day. Judging by the rankings page, people spread around the globe are busy improving their Sokoban level-designing software and resubmitting better levels. (Or they might be doing it in their heads; you don't need to submit any code, just the levels. I hear that occasionally a contestant using no software at all gets a rather good result in one of Al Zimmerman's contests… What happens inside the heads of such people I don't know.)
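To make the scoring rule concrete, here's a sketch of my reading of it – the ratio-based normalization is my guess at what "normalized" means, not the contest's actual scoring code:

# A sketch of my reading of the scoring rule - not the contest's actual code.
# best_steps[c]: the longest shortest-solution among current submissions in class c.
# my_steps[c]:   the shortest solution of my level in class c (0 if I submitted none).

def overall_score(my_steps, best_steps):
    score = 0.0
    for mine, best in zip(my_steps, best_steps):
        if mine and best:
            score += float(mine) / best   # the current record level gets exactly 1
    return score                          # at most 50 with 50 problem classes

print(overall_score(my_steps=[120, 95], best_steps=[150, 95]))   # 0.8 + 1.0 = 1.8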

There's also a discussion group, and if you're among the cunningest, most tenacious puzzle lovers who'll get to the top, you'll get – guess what? – a puzzle! Specifically, a gift card which you can use to buy, say, this Rubik's cube – or rather a Rubik's fuck-knows-what. I guess cubes are for sissies:

I personally think it's a bloody cool community/subculture to be in; a pity that I don't quite have the brains for it. (Could I hold my own if I really liked it? Or maybe I would really like it if I could hold my own? These are the types of questions it all brings to my mind.)

Anyway – if you're that sort of guy, it looks like a great thing for your brain to chew on. Good luck!

We're hiring

On behalf of easily the hottest computer company in Israel right now (self-driving cars, big noisy IPO, etc. etc.), I'm looking for people in the two areas I'm involved with:

  • software infrastructure / host tools
  • chips / hardware-software co-design

If you're a strong programmer, especially one who's also interested in math/vision/learning or low-level/hardware/optimization or both, you'll probably find enjoyable stuff to do. It looks like there's still plenty of room for growth, which usually means a lot of work to choose from. On the other hand, there's no risk of "wasting" your effort on a product which might never ship.

If you aren't a programmer but a hardware hacker (experienced in ASIC/FPGA or just straight out of school), we're hiring for these positions as well – I'll forward your CV to the relevant people.

People who work autonomously are a great fit, and they tend to like us – we gladly dial the extent of management down to near-zero levels when appropriate.

Also, I'd love to find people who tend to be at the center of things working with others – though there's always something in store for people who'd rather spend time alone with a worthy problem.

Relevant experience might be a plus, of course, but it's not a must.

Send email to Yossi.Kreinin@gmail.com, and tell your friends. Seriously – one never knows, and it's kinda awkward to sell vaguely described positions on a personal blog, but I think it can be a really nice opportunity.

Capital vs labor: who risks more?

AFAIK, in the developed world, the income tax rate is often higher than the capital gains tax rate. In particular, income is taxed "progressively", meaning that higher income results in a higher rate – which doesn't always happen with capital gains. (It looks like it does happen in the US, but income appears to be taxed "more progressively".)

One justification for this is that investors risk losing much or all of their capital. Workers, on the other hand, are guaranteed their wages. To some extent, the difference in the tax rates compensates for the difference in the risks.

To me this is sensible at first glance but completely harebrained on second thought:

  • The worker's big risk is choosing the profession. Higher-income professions often require a more expensive education. If the demand for the skills drops in the future, the income might fail to cover the worker's investment into acquiring these skills. Assuming that learning ability drops with age, the risk increases proportionately.
  • Moreover, this risk is not diversified. Even a small-time investor can spread his risk using some sort of index fund, betting on many stocks at a time. For a worker, on the other hand, there's no conceivable way to be 25% lawyer, 25% carpenter, 25% programmer and 25% neurosurgeon.
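If you want the diversification point in numbers, here's a toy simulation – the return and volatility figures are made up and the assets are assumed independent, but it shows what spreading a bet buys you:

# A toy illustration of diversification - made-up numbers, independent assets.
# Both bets have the same expected return; only one of them can be spread out.
import numpy as np

np.random.seed(0)
trials = 20000
single_bet = np.random.normal(0.07, 0.30, size=trials)                      # one profession
index_fund = np.random.normal(0.07, 0.30, size=(trials, 100)).mean(axis=1)  # 100 stocks

print("single bet:  mean %.3f, std %.3f" % (single_bet.mean(), single_bet.std()))
print("diversified: mean %.3f, std %.3f" % (index_fund.mean(), index_fund.std()))
# same expected return, but the diversified standard deviation is ~10x smaller (sqrt(100))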

The latter point may not interest utilitarian economists focused on GDP and "the greatest good for the greatest number" – for society as a whole, risk is always diversified, even if Joe Shmoe is stuck with the crummiest profession that looked really promising when he chose it. Moreover, an efficient market would provide a way to insure against the risk of poorly choosing your occupation. (I don't think real-life markets have provided such a thing, though I might be mistaken.)

But the first point – that the risk exists, and that it's more likely than not to be proportionate to the projected income – by itself kinda erodes the ground underneath an income tax higher than the capital gains tax, not? I mean, regardless of the motivation, it seems internally inconsistent. Unless the risks are all quantified and it all magically cancels out.

It's not that I'm complaining – unlike most professionals, we programmers get stock options. (Speaking of which – central bankers of the world, The Programmers' Guild says thanks a bunch for QE and ZIRP! Remind us to send some flowers.) I just honestly don't get it.

Econ-savvy readers: what do I miss?

(Please don't accuse me of failing to use the Google. Maybe I did so badly, but I did try using the Google. The Google told me, for instance, that the optimal capital gains tax is zero, because "you can't transfer from capitalists to workers", because the transfer depletes the capital pool or something. But I didn't understand how it accounts for the case where the workers are also capitalists because they invest their savings; and what if there's a progressive tax on capital gains? Is it, or is it not, possible to "transfer from capitalists to workers" – the flesh-and-blood people, as opposed to the abstract categories – regardless of whether you'd want to? Or if that's all nonsense – then is the optimal income tax also zero, because, hey, you're actually taxing human capital gains? Bringing us back a couple hundred years, to when there was just a sales tax? And again, not that I'd complain; I just don't quite follow the logic.)

(Also from the Google there's the claim that since nominal capital gains are taxed, the tax rate is actually higher than it looks because of inflation. But then the solution would be to tax inflation-adjusted gains, not to lower the rate arbitrarily, not? Also, wages are AFAIK the last set of prices to rise upon inflation – the "stickiest" – so workers' investment in their skills is continuously nibbled at by inflation as well. And the point made about "discouraging savings" – well, a high income tax discourages costly education and the subsequent ongoing investment in one's skills, which is a form of investment/savings/call it what you like. Same diff.)

I think it's pretty obvious that workers risk much more than investors. Whether this should lower their taxes I don't know, but it sort of makes sense to me.

Do company names actually matter?

This is a bit of a trite thought, but: Can it be that company names actually matter? Consider some examples:

  • Microsoft: stands for "microprocessor software". Still the dominant software vendor for descendants of the original microprocessor. Never made commercially successful hardware.
  • Intel: stands for "integrated electronics" (chips.) Upon commoditization of DRAM, successfully pivoted to microprocessors – a harder-to-design, easier-to-differentiate kind of chip. Growth slowed when less "vertically integrated" competitors emerged, with both chip manufacturing (fabs) and circuit design/"IP" (CPUs, GPUs etc.) getting commoditized (TSMC + ARM is cheaper than Intel.) Had/has a consumer product division which was never successful.
  • Google: "a very large number", does "things that scale". After its original service, web search, it had success with webmail and a smartphone OS. Not known for particularly attentive customer support (support doesn't scale.)
  • Amazon: "plenty", does "things that scale". Originally an online retail company, had success with e-book readers and "cloud infrastructure". Does everything it can so you never have to talk to support, which doesn't scale.
  • Samsung: "three stars", the word "three" representing "big, numerous and powerful". Indeed they are, in a wide variety of areas.
  • Facebook: a one-product company. Buys up other messaging, sharing and social networking companies so people can't flock elsewhere. Facebook phone tanked.
  • Twitter: another one-product company.
  • Apple: name stands for sweet stuff (say loyal users to befuddled me.) Used to be called "Apple Computer", renamed to plain "Apple" in 2007. Successfully incorporated device and chip design, server- and client-side software and retail into its business.
  • IBM: international business machines. A long-time near-monopoly in business computing, still going relatively strong in data storage systems, job schedulers and other corporate IT stuff. No considerable success in any consumer market despite the IBM PC being the first wildly successful computer for consumers (PC division eventually sold to Lenovo.)
  • The Walt Disney company: vigorously lobbies for copyright protection for characters created during Walt's days. Few characters created after Walt's death are nearly as successful; arguably the most successful ones came from acquiring Pixar, Marvel and Lucasfilm. Money spent on buying Pixar probably could have been saved if the company hadn't fired John Lasseter but had instead let him develop his ideas about computer animation. Would Walt have failed to see the potential?
  • Toyota: originally "Toyoda", a family name. Today's CEO is a founder's descendant. The founder's first name is not in the company's name. The company does not seem weakened by the founder's demise.

Some of my Google/Wikipedia-based "research" might be off. And I doubt that founding a company called "Huge Profits" would necessarily net me huge profits. However:

  1. The company name often reflects founders' vision.
  2. Such a vision doesn't change easily, especially if a company is successful – because success is convincing/addictive, and because an organization was by now built around that vision.
  3. Once the founders are gone, the vision becomes even harder to change because nobody has the authority and confidence. (Indeed one way in which organizations die, IMO, is through the continuous departure of people with authority who are not succeeded by anyone with comparable authority. Gradually, in more and more areas, things operate by inertia without adapting to changes.) Steve Jobs had to come back to Apple Computer to rename it to just "Apple".

So if you're the rare person capable of starting a successful company, and you insist on it being a huge success and not just a big one, make the vision as unconstrained as you can imagine.

P.S. For investment advice taking company names into account, I recommend Ian Lance Taylor's excellent essay titled Stock Prices.

A better future (a programmer's first animated post)

Whatever else happens, you made a movie… Nobody can take that away. A hundred years from now, when we're all dead and gone, people will be watching this fucking thing.

Tony Soprano to his nephew (whom he murders over this movie in a few episodes)

So I made a 90-second animated, um, I guess it's a blog post. I don't know about a hundred years from now, but I proudly invite you to watch the fucking thing right now:

I hear that it's considered classy for animators, filmmakers and such to let their work stand on its own, either refraining from commentary or making it vague. However, anxious to secure my spot in eternity, I decided to rush my immortal masterpiece out the door, so I cut everything I could.

I then realized that I left out a delicate point which, despite my embarrassment, I must mention. Luckily, typing is much easier than animating, so the following afterthought was quick to put down [1]. Here goes.

The short isn't exactly a documentary, but real-life me did switch to a part-time job to free some time for animating, drawing, etc. I figured this mundane little step was a good topic for a starting filmmaker because it turned out to be surprisingly controversial. Here are some of the reactions I received:

  • "So you finally got fed up with the job?"
  • "Part-time? But everyone here needs you!"
  • "Wow, I'm jealous! I want to work less, too. They pay you the same, right?" No, they pay less, I said. "Oh. Ha-ha. Work less, get less. Interesting!"
  • "You should really work more, not less, while you're young. Futurists predict a huge global pension crisis, so save for your retirement!"
  • "I doubt this will fly with the big deadline coming. Who's gonna do all the work? Not me!" (This guy works part-time himself – very productively.)
  • "Doesn't your wife object?" (Actually, my light table is Rachel's gift. I don't know when/if I'd ever get one on my own.)

These comments suggest that many people want to work less, but something is keeping them from doing it. I can certainly relate to that. It took me 10 years to decide to work part-time – and then 5 more years to actually do it.

Why is it so hard?

My own reasons mostly revolve around money. Where I live, money is much easier to make programming than animating (good luck even finding a half-stable job working on animated features.)

Hence "I went into programming for the money", as I said in the short – as I always say. And initially I figured I'd work all I can and retire early – the opposite of working part-time and animating in my spare time ("settling for a fraction of the dream"). And you've just seen how I changed my mind.

But there's another thing, which I usually don't say and which I must reluctantly admit. You see, I came for the money, and then I started liking the getting paid part.

What's the difference between money and getting paid? There's a world of difference!

Winning a lottery is a way to obtain money without getting paid for a service. And spending your wage on designer clothes is a way to get paid without having any money left. The difference is this:

  • Money lets you buy things – food, living space, spare time, etc. It's about options.
  • Getting paid tells you the value of your service to whoever paid you. It's about achievement.

I like getting paid, I must admit despite the embarrassment.

Note that I'm not ashamed in the slightest to like money (options) and to have chosen a profession with the sole purpose of maximizing income.

Some people believe that you can't be happy doing something you don't love – and that you can't be any good at it, hence you won't make that much money, either. I disagree.

I'll tell you who my role model is, as a computer programmer. It's neither Bill Gates nor Richard Stallman. My role model is Alec Guinness, whom you probably remember as Obi-Wan Kenobi from Star Wars. Wikipedia says:

In letters to his friends, Guinness described the film as "fairy tale rubbish" <…>

He was one of the few cast members who believed that the film would be a box office hit;  he negotiated a deal for 2% of the gross royalties <…>

Lucas and fellow cast members … have spoken highly of his courtesy and professionalism, both on and off the set. Lucas <said> that Guinness contributed significantly to achieving completion of the filming.

Here's a man working on something he disliked because it paid – and delighting his target audience and colleagues alike. To me it shows that "extrinsic motivation" – money – is a perfectly good primary motivation, contrary to some researchers' conclusions.

I'm in it for the money – hence, I'll never get bored and lose interest, as long as there's money to be made. I'll dutifully work on the unpleasant parts necessary to get things actually done (and get paid). Not liking programming that much, I try to keep my programs short and easy to maintain and extend – so I can program less.

These are all desirable traits – and not everyone genuinely loving programming has them. Think about it. Who's the better henchman – the psychopath murdering for the thrill, or the coldblooded killer who's in it for an early retirement? Same thing here.

I'm your perfect henchman. Pay me, give me some time alone with your computers, and when you come back, you'll find them doing your bidding. Of this I am not ashamed.

Recently, however, I noticed that I'm no longer the coldblooded henchman I used to be, that I started to enjoy the thrill of the kill for its own sake. And this I cannot admit without blushing.

That's what getting paid does to the weak-minded. It warped my value system. The phases of my transition – or should I say my moral decay – went something like this:

  1. I program because they pay me.
  2. Programming is good because they pay me.
  3. Programming is good.
  4. Programming is good! I think I'll go program right now. Or read about it. Or write something about it. All in my spare time.

And there you have it. "Achievement" has been redefined to mean "that thing you get paid for".

This is how I ended up with a website dedicated largely to programming. Then a programming blog on the site and a similar blog elsewhere. Then came the ultimate downfall: open-source programs on GitHub, written during evenings and weekends.

And I don't regret the writing. Keeping readers' attention on my very dry programming-related subjects is a worthy challenge for any starting storyteller.

But programming for free? Me? If you're good at something, never do it for free! Oh, how the weak-minded have fallen.

Sometimes you need to hit rock bottom to begin rising. So it was with me. Realizing that I'd just programmed for free for several weekends in a row made me think. Hard.

"I can't believe you," I said to myself. "All this money-chasing at least made some sense. But programming for free? Why not draw in your spare time instead? What's wrong with you?"

"Not so fast," said self. "For free or not, at least here you are doing something you're good at. You know you're good – you get paid for it! Would they pay you for drawing? Not so soon. Maybe never. Even if you're any good. In fact you'll never know if you're any good. Not if you're never paid. Nor if you're paid badly, which happens all the time in those arty parts of the world, even to the best. Why not stick to things you're good at – that you know you're good at?"

Could you believe this guy? Well, I wouldn't have any of that.

"You shameless, hypocritical, baiting-and-switching COWARD," I screamed at self at the top of my lungs. "You always said programming was for the money – to buy time, to buy that bloody creative freedom you kept chattering about! And now you say I should keep programming because I got good at it? But of course I got good – I've been doing it all this time! I could have gotten just as good at drawing – I still can – if I have the time!"

"And you say that now when I can afford some spare time," I went on, "I should regardless stick to what I'm good at, which by now is programming? Are you hinting that I won't ever draw very well? Is that why you suggested a career in programming in the first place – because you didn't believe I could draw? Was that chanting about needing money one big lie all along? Tell me, you lying bastard! I'm gonna -"

"OK, OK, chill, man, CHILL!" Self looked scared and unsettled. He clearly didn't see it coming. Now he was looking for some way to appease me. "You know," said self, "maybe you're right. Remember how you're always proud of taking a long-term view? Of how you care today about things 5, even 10 years ahead?"

I smiled smugly. Indeed I was proud of my long-term-centered, strategic thinking. Bob Colwell – the legendary computer architect – once said that it's the architect's duty to think about the long term, because nobody else will. I so identified with that. (Colwell and I are both computer architects, you see – just, um, of different calibers.)

"Well," said self, "you should be proud. Too many lose sight of the future because of today's small but pressing worries!"

"Yeah, yeah, yeah." I was losing my patience. "Thanks but no thanks for your brown-nosing. Listen, are you playing bait and switch on me again? What does this have to do with drawing?"

"But that's the point – it's the same thing," self exclaimed, "it's about long-term thinking! Sure, in the short term, maybe you're better at programming than drawing. But keep practicing and yes, of course you'll get good at drawing! Use your favorite superpower – your ability to imagine the future vividly, to practically live there – to overcome the temptation to stick to your comfort zone! Secure a better future today! Be the shrewd guy investing in a little unknown startup – yourself the would-be animator – to reap great benefits down the road! Be -"

"I get it. That's what I said though, isn't it? Let's practice – let's draw in the spare time."

"Sure. Sure! You're right," said self submissively. "I'm actually helping you, see? I'm telling you how to use your strengths to take the plunge!"

"Thaaaanks. You know what? We'll work 20% less hours every week to get some more spare time."

"What?! But, the money -"

"SHUT UP. Shut up for your own good. I made you enough money."

"How much is enough?" asked self.

Now he really got me worked up.

"How much is enough, he dares to ask?! How come you never told me? I was toiling for you year after year. When did you plan to give me some rest?"

"Well…" mumbled self. Obviously he had no answer. I should have known all along!

"Nothing is ever enough for you, isn't it?.. Well, listen up. We'll work 20% less hours every week. And every time you whine about the money, which you will, I'll say – sure, that spare time was an expensive present. Indeed we shouldn't waste it idly surfing the net. Let's draw! See? I'll use your bitching and complaining about the money to get myself to draw!" – I shrieked and cackled evilly.

"That's it," self whispered, "he lost his marbles. Talking to himself, too. Oh my oh my oh -"

"Shut your lying mouth. I'm not done. You'll find us a weekly drawing class. Also we're going to the zoo every week. I'll draw animals, and if you say nasty things about my drawings, we'll keep going anyway. Don't count on my tendency to follow inspiration rather than routine. I know how you can keep killing inspiration for months on end with your snarky remarks. Guess what – I'll still go. And -"

"OK, OK. OK! We'll go, we'll spend 20% of your salary so you can draw those bloody animals. Just calm down already. Sheesh!"

And thus my self unconditionally capitulated, and we lived happily ever after. Thus ends my treatment of that delicate point – the difference between wanting money and wanting to get paid.

Is the spare time – the most expensive gift I ever got myself – worth its price? You bet!

[1] When I say this was "quick to put down", I mean that it took hours, and pages upon pages of words written to be thrown away. (I accidentally published a draft showing all that wasted effort.) It's still way quicker than animating…

Sorry for having published a draft

I've just unpublished it.

Yikes. You know what's a good metaphor for a draft? Someone's undressed self who in a less than fully awake state makes uncertain steps towards the toilet. That's a first draft of that person's publicly presentable self.

And publishing a draft? It's a bit like being photographed in this state. I mean, we all pass through that state every morning. Nothing to be ashamed of, then. And yet.

I think a lot of RSS aggregators never delete a published post, even if it then disappears from the up-to-date RSS stream of the original site. Bastards. Someone should spread their naked morning photos all over the internet and see how they like it.

Anyway, sorry, and the finished thing (10 times shorter without all the written-then-thrown-away bits) is coming up soon.

A better future

I accidentally published this before finishing it. I replaced the longish unfinished text with this in the hope of getting RSS readers to forget my blunder. Sorry for all the clutter.

Things from Python I'd miss in Go

Let's assume Go has one killer feature – cheap concurrency – not present in any other language with similar serial performance (ruling out, say, Erlang). And let's agree that it gives Go its perfect niche – and we're only discussing uses of Go outside that niche from now on. So we won't come back to concurrency till the very end of the discussion. (Update – the "killer concurrency" assumption seems false… but anyway, we'll get to it later.)

***

Brian Kernighan once called Java "strongly-hyped". Certainly his ex-colleagues' Go language is one of the more strongly-hyped languages of today – and it happens to have a lot in common with Java in terms of what it does for you.

Rob Pike said he thought C++ programmers would switch to Go but it's Python programmers who end up switching. Perhaps some do; I, for one, am a Python programmer not planning to switch to Go – nor will I switch to Go from (the despicable) C++ or (the passable) C.

Why so?

What does Go have that C++ lacks? Mandatory garbage collection, memory safety, reflection, faster build times, modest runtime overhead. C++ programmers who wanted that switched to Java years ago. Those who still stick to C++, after all these years, either really can't live with the "overheads" (real time apps – those are waiting for Rust to mature), or think they can't (and nothing will convince them except a competitor eventually forcing their employer out of business).

What does Go have that Python lacks? Performance, static typing. But – again – if I needed that, I would have switched to Java from Python years before Go came along. Right?

Maybe not; maybe Java is more verbose than Go – especially before Java 8. Now, however, I'd certainly look at Java, though perhaps Go would still come out on top…

Except there are features I need that Go lacks (most of which Java lacks as well). Here's a list:

Dynamic code loading/eval

Many, many Python use cases at work involve loading code from locations only known at run time – with imp.load_source, eval or similar. A couple of build systems we use do it, for instance – build.py program-definition.py, that kind of thing. Also compilers where the front-end evals (user's) Python code, building up the definitions that the back-end (compiler's code) then compiles down to something – especially nice for prototyping.
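A minimal sketch of that pattern – the 'program_definition' name and the TARGETS attribute are hypothetical, but the real build scripts aren't much more involved:

# A sketch of loading a definitions file whose name is only known at run time.
import sys
import imp   # Python 2-era API; importlib does the same job in newer Pythons

def load_definitions(path):
    # compile and run the file at 'path', returning it as a module object
    return imp.load_source('program_definition', path)

if __name__ == '__main__':
    defs = load_definitions(sys.argv[1])   # e.g.: build.py program-definition.py
    print('building targets: %s' % (defs.TARGETS,))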

I don't think Go has a complete, working eval. So all those use cases seem basically impossible – though in theory you could try to build the eval'ing and eval'd code on the fly… Maybe in some cases it'd work. Certainly it's not something Go is designed for.

REPL

No REPL in Go – a consequence of not having eval. There are a few programs compiling Go on the fly, like go-repl, but you can't quite build up arbitrary state that way – you're not creating objects that "live" inside the session. Not to mention the following little behavior of Go – a production-friendly but prototyping-hostile language if there ever was one:

> + fmt
! fmt> fmt.Println("Hello, world!")
Hello, world!
! fmt> println("This won't work since fmt doesn't get used.")
Compile error: /tmp/gorepl.go:2: imported and not used: fmt

At home, I use DreamPie because I'm on Windows – where a scanner and a Wacom tablet work effortlessly, but there's no tcsh running under xterm. Perhaps go-repl could replace DreamPie – after all, you can't exactly build up state in live objects in tcsh either, and I'm quite used to tcsh. But I kinda got used to having "live objects" in DreamPie as a side effect of using it instead of tcsh.

And certainly Python REPLs always got, and always will get, more love than Go's, because Python is designed for this kind of thing and Go is not.

numpy

So Python is slow and Go is fast, and just as readable, right? Try doing linear algebra with large matrices.

Python has numpy which can be configured as a thin wrapper around libraries like Intel's MKL – utilizing all of your cores and SIMD units, basically unbeatable even if you drop down to assembly. And the syntax is nice enough due to operator overloading.
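For the flavor of it, here's a tiny sketch of the kind of code that stays readable thanks to operator overloading, with the heavy lifting handled by whatever BLAS/LAPACK numpy was built against (MKL being one option):

import numpy as np

n = 2000
A = np.random.rand(n, n)
b = np.random.rand(n)

# readable linear algebra thanks to overloaded operators; the products and the
# solve below are dispatched to the underlying BLAS/LAPACK (e.g. Intel's MKL)
S = 0.5 * (A + A.T) + n * np.eye(n)   # a symmetric, well-conditioned matrix
x = np.linalg.solve(S, b)             # solve S x = b
print(np.abs(S.dot(x) - b).max())     # residual - should be tiny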

Go doesn't have operator overloading, making scientific computing people or game programmers cringe. Go is ugly when it comes to linear algebra. Will it at least be as fast as Python with MKL? I doubt it – who'll continuously invest efforts to make something hopelessly ugly somewhat faster at these things? Even if someone does – even if someone did – who cares?

Why not have operator overloading? For the same reasons Java doesn't: "needless complexity". At least in Go there's a "real" reason – no exceptions, without which how would you handle errors in overloaded operators? Which brings us to…

No exceptions

Meaning that every time I open a file I need to clutter my code with an error handling path. If I want higher-level error context I need to propagate the error upwards, with an if at every function call along the way.

Maybe it makes sense for production servers. For code deployed internally it just makes it that much longer – and often error context will be simply omitted.

More code, less done. Yuck.
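For contrast, this is the Python behavior I'd miss – a trivial sketch with made-up config files, where errors propagate on their own and higher-level context gets added in exactly one place:

# A sketch of the Python side of the comparison - the config files are made up.
import json

def load_config(path):
    with open(path) as f:     # an IOError propagates on its own -
        return json.load(f)   # no 'if err != nil' at every call site

def load_all(paths):
    return [load_config(p) for p in paths]

try:
    configs = load_all(['a.json', 'b.json', 'c.json'])
except (IOError, OSError, ValueError) as e:
    # context added once, here, instead of at every level of the call chain
    raise RuntimeError('failed to load the tool configuration: %s' % e)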

Update: a commenter pointed out, rather emphatically I might add, that you can panic() instead of throwing an exception, and recover() instead of catching it, and defer() does what finally does elsewhere. Another commenter replied that the flow will be different in Go in some cases, because defer() works at a function level, so you effectively need a function where you could use a try/finally block elsewhere.

So the tendency to return error descriptions instead of "panicking" is a library design style more than strictly a necessity. This doesn't change my conclusions, because I'd be using libraries a lot and most would use that style. (And libraries panicking a lot might get gnarly to use because of defer/recover not working quite like try/finally; "strict necessities" are not that different from this sort of "style choice".)

GUI bindings

We use Python's Qt bindings in several places at work. Go's Qt bindings are still "not recommended for any real use" / "at an alpha stage".

This might change perhaps – I don't know how hard making such bindings is. Certainly concurrent servers have no GUI and Go's designers and much of its community don't care about this aspect of programming.

Not unlike the numpy situation, not? Partly it's simply a question of who's been around for more time, partly of what everyone's priorities are.
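For reference, here's how little it takes to get a window on the screen with Python's Qt bindings – a minimal sketch assuming PyQt4, which may or may not be the binding we actually use:

# A minimal PyQt4 sketch - PyQt4 is my assumption here.
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
label = QtGui.QLabel("a plugin would draw its overlays in a widget like this")
label.resize(400, 100)
label.show()
sys.exit(app.exec_())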

All those things, combined

We have at work this largish program full of plugins that visualizes various things atop video clips, and we're in the process of making it support some kinds of increasingly interactive programming/its own REPL. Without eval, operator overloading for numeric stuff, the right GUI bindings or exceptions it doesn't seem easy to do this kind of thing. To take a generally recognizable example – Python can work as a Matlab replacement for some, Go can't.

Or, we do distributed machine learning, with numpy. What would we do in Go? Also – would its concurrency support help distribute the work? Of course not – we use multiple processes which Go doesn't directly support, and we have no use for multiple threads/goroutines/whatever.
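The multiple-processes part looks roughly like this in Python – a sketch, with the per-chunk learning stubbed out as a column sum:

# A sketch of spreading numpy work across processes; the real per-chunk job
# is machine learning code, stubbed out here.
import numpy as np
from multiprocessing import Pool

def work_on_chunk(chunk):
    return chunk.sum(axis=0)              # stand-in for the real computation

if __name__ == '__main__':
    data = np.random.rand(1000000, 16)
    chunks = np.array_split(data, 8)      # one piece per worker process
    pool = Pool(processes=8)
    partial_results = pool.map(work_on_chunk, chunks)   # runs in 8 processes
    pool.close()
    pool.join()
    print(sum(partial_results))           # combine the partial results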

Conclusion

Just looking at "requirements" – ignoring issues of "taste/convenience" like list comprehensions, keyword arguments/function forwarding etc. – shows that Go is no replacement for Python (at least to the extent that I got its feature set right).

What is Go really good at? Concurrent servers.

Which mature language does Go compete with most directly? Java: rather fast (modulo gc/bounds checking), rather static (modulo reflection/polymorphic calls), rather safe (modulo – funnily enough – shared-state concurrency bugs). Similar philosophies as well, resulting in feature-set similarities such as the lack of operator overloading – and a similar kind of hype.

(Updated thanks to an HN commenter) Does Go have a clear edge over Java when it comes to concurrency? With Quasar/Pulsar, maybe not anymore, though I haven't looked deep enough into either. Certainly if you're thinking about using Go, Java seems to be a worthy option to consider as well.

Would I rather be programming in Go than C or C++? Yes, but I can't. Would I rather be programming in Go than Python? Typically no, and I won't.

P.S.

Is there a large class of Python programmers who might switch to Go? Perhaps – people working on any kind of web server back-end code (blogs, wikis, shops, whatever). Here the question is what it takes to optimize a typical web site for performance.

If most of the load, in most sites, can be reduced drastically by, say, caching pages, maybe learning a relatively new language with a small community, fewer libraries and narrower applicability isn't worth it for most people. If on the other hand most code in most websites matters a lot for performance, then maybe you want Go (assuming other languages with similar runtime performance lack the cheap concurrency support).

I'm not that knowledgeable about server code so I might be off. But judging by the amount of stuff done in slow languages on the web over the years, and working just fine (Wikipedia looks like a good example of a very big but "static enough" site), maybe Go is a language for a small share of the server code out there. There are however Facebook/Twitter-like services which apparently proved "too dynamic" for slow languages.

Which kind of service is more typical might determine how much momentum Go ultimately gains. Outside servers – IMO if it gains any popularity, it will be a side-effect of its uses in servers. (And the result – such as doing linear algebra in Go – might be as ugly as using Node.js for server-side concurrency, "leveraging" JavaScript's popularity…)

And why would Python programmers then be switching to Go, but not C++ programmers? Because nobody is crazy enough to write web sites in C++ in the first place…

How to make a heap profiler

I have a new blog post at embeddedrelated.com describing heapprof, a 250-line heap profiler (C+Python) working out of the box on Linux, and easy to port/tweak.

Why bad scientific code beats code following "best practices"

I've just read "The Low Quality of Scientific Code", which claims that code written by scientists comes out worse than it would if "software engineers" were involved.

I've been working, for more than a decade, in an environment dominated by people with a background in math or physics who often have sparse knowledge of "software engineering".

Invariably, the biggest messes are made by the minority of people who do define themselves as programmers. I will confess to having made at least a couple of large messes myself that are still not cleaned up. There were also a couple of other big messes where the code luckily went down the drain, meaning that the damage to my employer was limited to the money wasted on my own salary, without negative impact on the productivity of others.

I claim to have repented, mostly. I try rather hard to keep things boringly simple, and I don't think I've done anything in the last 5-6 years that causes a lot of people to look at me funny after having spent the better part of a day dealing with the products of my misguided cleverness.

And I know a few programmers who have explicitly not repented. And people look at them funny and they think they're right and it's everyone else who is crazy.

In the meanwhile, people who "aren't" programmers but are more of a mathematician, physicist, algorithm developer, scientist, you name it, commit sins mostly of the following kinds:

  • Long functions
  • Bad names (m, k, longWindedNameThatYouCantReallyReadBTWProgrammersDoThatALotToo)
  • Access all over the place – globals/singletons, "god objects" etc.
  • Crashes (null pointers, bounds errors), largely mitigated by valgrind/massive testing
  • Complete lack of interest in parallelism bugs (almost fully mitigated by tools)
  • Insufficient reluctance to use libraries written by clever programmers, with overloaded operators and templates and stuff

This I can deal with, you see. Somehow, when anyone wants me to help debug something, I rarely have a problem figuring out what these guys were trying to do. I mean in the software sense. Algorithmically maybe I don't get them fully. But what variable they want to pass to what function I usually know.

Not so with software engineers, whose sins fall into entirely different categories:

  • Multiple/virtual/high-on-crack inheritance
  • 7 to 14 stack frames composed principally of thin wrappers, some of them function pointers/virtual functions, possibly inside interrupt handlers or what-not
  • Files spread in umpteen directories
  • Lookup using dynamic structures from hell – dictionaries of names where the names are concatenated from various pieces at runtime, etc.
  • Dynamic loading and other grep-defeating techniques
  • A forest of near-identical names along the lines of DriverController, ControllerManager, DriverManager, ManagerController, controlDriver ad infinitum – all calling each other
  • Templates calling overloaded functions with declarations hopefully visible where the template is defined, maybe not
  • Decorators, metaclasses, code generation, etc. etc.
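And a made-up caricature of the other genre – a name forest of thin wrappers, a registry keyed by strings assembled at runtime, and one line at the bottom that actually does something:

# A made-up caricature of "software engineer code": thin wrappers, near-identical
# names, grep-defeating dynamic lookup - and the actual work buried at the bottom.
REGISTRY = {}

def register(name):
    def wrap(cls):
        REGISTRY[name] = cls              # good luck grepping for the class name
        return cls
    return wrap

@register('driver' + '_' + 'controller')  # the lookup key is assembled at runtime
class DriverController(object):
    def control(self, manager):
        return manager.manage()

class ControllerManager(object):
    def __init__(self, driver_manager):
        self.driver_manager = driver_manager
    def manage(self):
        return self.driver_manager.drive()

class DriverManager(object):
    def drive(self):
        return 42                         # the actual work, several layers down

controller = REGISTRY['driver_controller']()
print(controller.control(ControllerManager(DriverManager())))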

The result is that you don't know who calls what or why, debuggers are of moderate use at best, IDEs & grep die a slow, horrible death, etc. You literally have to give up on ever figuring this thing out before tears start flowing freely from your eyes.

Of course this is a gross caricature, not everybody is a sinner at all times, and, like, I'm principally a "programmer" rather than a "scientist", and I sincerely believe I have a net positive productivity after all – but you get the idea.

Can scientific code benefit from better "software engineering"? Perhaps, but I wouldn't trust software engineers to deliver those benefits!

Simple-minded, care-free near-incompetence can be better than industrial-strength good intentions paving a superhighway to hell. The "real world" outside the computer is full of such examples.

Oh, and one really mean observation that I'm afraid is too true to be omitted: idleness is the source of much trouble. A scientist has his science to worry about so he doesn't have time to complexify the code needlessly. Many programmers have no real substance in their work – the job is trivial – so they have too much time on their hands, which they use to dwell on "API design" and thus monstrosities are born.

(In fact, when the job is far from trivial technically and/or socially, programmers' horrible training shifts their focus away from their immediate duty – is the goddamn thing actually working, nice to use, efficient/cheap, etc.? – and instead they declare themselves as responsible for nothing but the sacred APIs which they proceed to complexify beyond belief. Meanwhile, functionally the thing barely works.)