"You have 59 posts and 103 drafts", says WordPress, and I guess it's right. SELECT COUNT() FROM table" doesn't lie, and then I've always had about 2 unpublished drafts per published post.
Let's try this: I'll give you this list of drafts and you'll tell me which of these you want me to write. I have a bunch, and they're all fresh enough for me to still want to write them – I still remember what I meant to say – but help me prioritize.
So, here goes:
- FPGAs vs cellular automata: cellular automata are one example of trying to model computation using local rules, "because the laws of nature are local" or what-not. But real life is not like a game of life, and long wires are everywhere – from long axons to the trans-Atlantic cables we humans laid out right after learning how to do so. I'd write about how FPGAs, which are popular with people who like "locality" and "big flexible computing grids" and other such ideas, support complicated routing and not just simple connections to neighbors. (Which is why FPGAs are a real platform and not a pipe dream. There's a tiny local-rules-vs-long-wires sketch after this list.)
- Microarchitecture vs macroarchitecture. I love this one, because it sounds like a somewhat original, or at least not very widely appreciated, idea. Basically, in hardware there are two different ways to attack problems. Examples of micro vs macro are: out-of-order vs VLIW; SIMT vs SIMD; hardware prefetchers vs DMA engines; caches vs explicitly managed local memories; and software-driven processors vs FPGAs. It's an implicit vs explicit thing – you can make hardware that cleverly solves the problem in a way which is opaque to software, or you can expose "bare" resources that software can manage in order to solve the problem itself. There are a whole lot of implications which I think aren't obvious, at least as a whole and perhaps separately – for example, one approach is friendlier to context switching than the other (see the implicit-vs-explicit caching sketch after the list). This could be a good read for someone on a small team designing hardware – I could convince him to do macro and accept the drawbacks; micro is for Intel.
- Efficiency: shortcuts and highways. To get from A to B, you either travel less – that's a shortcut – or travel more simply – that's a highway. (Highways are necessarily "simpler" than other paths – wider, straighter, with fewer junctions – and they're expensive to build, so not every path can be a highway.) It appears to be a good metaphor for optimization and accelerator hardware in many ways. For example, getting to the highway can be a bottleneck – similarly with transferring and organizing data for accelerators/optimized functions (the on-ramp bottleneck is demoed in a sketch after the list). There are other similarities. This should, in particular, be a nice read for someone good at algorithmic optimization (shortcuts) who is curious about "brute force" optimization (highways).
- "Don't just blame the author of this manual" – this is a quote from an actual manual that I like; the context is that the manual bluntly tells why a feature may likely do something different from what you think it does, and what you should do about it. Basically this spec is outrageous if you think of it as a contract – not much is promised to you - but very good as communication – you understand exactly what the designers were doing and what to expect as a result. The distinction between "specs as contracts" and "specs as communication" is an interesting one in general.
- Leibniz management: I mention "the best of all possible worlds" together with the efficient market hypothesis, because that's what some ways of turning the "efficiency" argument on its head remind me of. For instance: the market is efficient, therefore, if we spend $1M on software or equipment, it must be worth the money (the option of us being suckers and the vendor being sustained by suckers is ignored). Similarly, if you propose an improvement, you're told it's not going to work, since if it did, others would have done it already – because of said "efficiency". And other similar notions, all popular with management.
- "No rules" is a rule, and not a simple one: I guess I don't like rules, so I tend to think we should have less of them. It recently occurred to me that what we'd really want is not less rules but simpler rules and that it's not the same thing. For example, if you have no rules about noise, then one has a right to make noise (good) but no right for silence (bad), which is a trade-off that some like and others don't – but on top of that, it creates ugly complications so isn't a simplification compared to having a rule against noise (for example, I can make noise near your property to devalue it). Similarly, giving someone "a right to configure" takes someone else's "right to certainty" – be it config files or different device screen sizes or whatever – also a trade-off. Someone's right to check in code without tests takes someone else's right to count on checked-out code to work. Someone's right to pass parameters of any type takes someone else's right to count on things being of a certain type. So, not only choosing the rules, but choosing the amount of rules is a hairy trade-off. Which goes against of my intuition that "less rules is good" , all else being equal.
- Does correctness result from design or from testing? – two claims. 1: correctness always results from testing, never from design; if it wasn't tested, it doesn't work – and moreover, it contains some sort of "conceptual" flaw and not just a typo. 2: however, the very property of testability always results from sound design. If it's sufficiently badly designed, no amount of testing can salvage it, because good testing is whitebox testing, or at least "gray box" testing, and bad design is impenetrable to the mind – a black box (a small testability sketch follows the list).
- Testing is parallelizable, analysis isn't – a complementary idea (perhaps I'd merge them). Suppose you have $10M to throw at "quality" – at finding bugs. You could massively test your program, or you could thoroughly analyze it – formal proofs, human proofreading, or some such. Testing you can parallelize: you can buy hardware, hire QA people and so on (a toy parallel test runner appears after the list). The insight is that you can't really parallelize analysis: to even tell that two parts can be analyzed separately, a lot of analysis is required, and then frequently things truly can't be analyzed separately – if you understand that listing a huge directory is horribly slow on your filesystem, this understanding is worthless in isolation from the fact that some piece of your software creates huge directories. So analysis can't happen fast even if you're willing to pay money – you have to pay in time. Two things follow: when something is shipped fast and works, we can conclude that it was made to work through testing, not analysis; and pushing for testing is characteristic of the private/commercial sector, where money is cheaper, whereas pushing for analysis is characteristic of the public/academic sector, where time is cheaper.
- Buying code takes more knowledge than building it – because when you build it yourself, you can afford to acquire much of the knowledge as you go, but when you're making a purchasing decision and you lack some of the required knowledge, you'll buy the wrong thing and then acquire the knowledge through having things not work – much too late. I'd give A LOT of examples from my own experience; it's quite the problem, I believe. (There's frequently no way around buying code, especially if you're making hardware, but I think it's still an interesting angle on "buy vs build".)
- Make your code serial and your data scalar: I claim that things that don't work that way are not debuggable, and that most people have a hard time with them. For example: type systems (C++, Haskell, Scala, even CLOS), vector languages (APL, Matlab before they had a fast for loop), Prolog (even though I think solvers are a great idea), Makefiles (even though I think DSLs are a great idea), lazy evaluation (Haskell). (The last sketch below shows the serial/scalar and vectorized versions side by side.)
There's more, but let's first see what you think about these.
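
P.S. Here are the little sketches promised above, in the order of the drafts. All of them are mine and purely illustrative – made-up names, made-up numbers – not excerpts from the drafts themselves. First, local rules vs long wires: a 1D cellular automaton where each cell sees only its neighbors, plus one "long wire" feeding a distant cell's state into cell 0 – the kind of non-local routing a pure game-of-life world lacks and FPGA fabrics provide.

```python
# A 1D binary cellular automaton: each cell updates from its local
# neighborhood only - "the laws of nature are local".
def step_local(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[i] ^ cells[(i + 1) % n]
            for i in range(n)]

# The same automaton plus one "long wire": cell 0 also sees a cell at
# the far end of the grid - a route no neighbor-only rule can express.
def step_with_long_wire(cells, far=-1):
    nxt = step_local(cells)
    nxt[0] ^= cells[far]  # the non-local input
    return nxt

cells = [0] * 16
cells[8] = 1
for _ in range(5):
    cells = step_with_long_wire(cells)
print(cells)
```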
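Next, implicit vs explicit, translated into software terms (functools.lru_cache playing the "cache", a plain dict playing the "explicitly managed local memory"): the implicit version is less work and opaque; the explicit version is more work, but its state is visible program state – which is exactly why it's friendlier to context switching.

```python
import functools

def expensive_read(addr):
    return addr * 2  # stand-in for a slow memory access

# "Micro"/implicit: the cache is opaque to the program - you can't ask
# what's in it, pin entries, or save/restore it on a context switch.
@functools.lru_cache(maxsize=4)
def load_implicit(addr):
    return expensive_read(addr)

# "Macro"/explicit: a "bare" local memory that software manages itself.
# More work, but the contents are visible - easy to inspect, size
# deliberately, and save/restore when switching contexts.
class LocalMemory:
    def __init__(self, size):
        self.size = size
        self.data = {}  # addr -> value, managed by this program

    def load(self, addr):
        if addr not in self.data:
            if len(self.data) >= self.size:
                self.data.pop(next(iter(self.data)))  # our own eviction policy
            self.data[addr] = expensive_read(addr)
        return self.data[addr]

mem = LocalMemory(size=4)
print(load_implicit(7), mem.load(7))
```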
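Then the highway on-ramp, with numpy standing in for the accelerator (assuming you have numpy around): the vectorized sum is the highway; converting a Python list on every call is the congested on-ramp that can eat the gain.

```python
import time
import numpy as np

data = list(range(1_000_000))  # the data lives in "slow" Python-land

def sum_via_highway_every_time(xs):
    # Pay the on-ramp toll per call: convert, then use the fast path.
    return np.asarray(xs).sum()

arr = np.asarray(data)  # pay the on-ramp once, then stay on the highway

t0 = time.perf_counter(); sum_via_highway_every_time(data); t1 = time.perf_counter()
t2 = time.perf_counter(); arr.sum(); t3 = time.perf_counter()
print(f"convert+sum: {t1 - t0:.4f}s, sum only: {t3 - t2:.4f}s")
```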
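The parameter-types trade-off from the "no rules" item, with hypothetical functions: no rule means the caller may pass anything, and the failure surfaces deep inside, far from its cause; one more rule restores the callee's "right to certainty".

```python
def average_no_rules(xs):
    # Caller's right: pass anything. Callee's lost right: certainty.
    return sum(xs) / len(xs)   # blows up inside sum() if xs holds strings

def average_with_rules(xs):
    # One more rule - but failures happen at the boundary, loudly.
    if not all(isinstance(x, (int, float)) for x in xs):
        raise TypeError("average wants a list of numbers")
    return sum(xs) / len(xs)

print(average_with_rules([1, 2, 3]))   # fine: 2.0
# average_no_rules([1, "2", 3])        # TypeError deep inside, far from the caller's mistake
```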
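For design vs testing, a minimal sketch of "testability results from design" (a made-up rate limiter, both versions mine): the same decision written once as an opaque blob wired to the real clock, and once as a pure function of visible state that a whitebox test can drive directly.

```python
import time

# Black box: logic tangled with the real clock and hidden state - a
# test can only poke it and wait, with no invariant to check directly.
_last = 0.0
def allow_request_blackbox():
    global _last
    now = time.time()
    ok = now - _last >= 1.0
    if ok:
        _last = now
    return ok

# Whitebox-friendly design: the decision is a pure function of visible
# state, so a test can feed in times and assert the invariant.
def allow_request(last, now, min_gap=1.0):
    ok = now - last >= min_gap
    return ok, (now if ok else last)

# The test neither sleeps nor guesses:
ok, last = allow_request(last=0.0, now=0.5)
assert not ok
ok, last = allow_request(last=0.0, now=1.5)
assert ok and last == 1.5
```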
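For the parallelizability of testing, a toy test runner (the unit under test and its planted bug are made up): the cases are independent, so more processes means proportionally more testing – and there's no analogous Pool.map for understanding a design.

```python
from multiprocessing import Pool

def shipped_abs(x):
    # the unit under test; planted bug: negative even numbers stay negative
    return -x if x < 0 and x % 2 else x

def check(x):
    # one independent test case - shares nothing with the others
    return x, shipped_abs(x) == abs(x)

if __name__ == "__main__":
    with Pool() as pool:                      # more money -> more processes
        results = pool.map(check, range(-1000, 1000))
    print([x for x, ok in results if not ok][:5])  # first few failing inputs
```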
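And finally, serial code and scalar data (numpy again standing in for the vector languages): both versions compute the same number, but only the loop has an "inside" where you can put a breakpoint and watch intermediate values.

```python
import numpy as np

prices = [3.0, -1.0, 4.0, 1.5]

# Vector style: one opaque step. When the answer is wrong, where do
# you put the breakpoint?
p = np.array(prices)
total_vec = np.where(p > 0, p * 1.17, 0.0).sum()

# Serial and scalar: the same computation, but every intermediate
# value has a name and a moment in time you can inspect.
total = 0.0
for price in prices:
    if price > 0:
        taxed = price * 1.17
        total += taxed        # breakpoint here; print(price, taxed, total)

assert abs(total - total_vec) < 1e-9
```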