Low-level is easy
My previous entry has the amazingly interesting title "Blogging is hard". Gee, what a wuss, says the occasional passer-by.
Gotta fix my image, fast. Think, think, think! OK, here's what I'm going to tell you: low-level programming is easy. Compared to
higher-level programming, that is. I'm serious.
For starters, here's a story about an amazing developer, who shall remain nameless for the moment. See, I know this amazing
developer. Works for Google now. Has about 5 years of client-side web programming under his belt. And the word "nevermore"
tattooed all over his body (in the metaphorical sense; I think; I haven't really checked). One time, I decided that I have to
understand this nevermore business. "Amazing Developer", I said, "why have you quit the exciting world of web front-ends?" "I
don't like it", says the Amazing Developer. "But, but! What about The Second Dot Com Bubble? VC funds will beg you to take their
$1M to develop the next Arsebook or what-not. Don't you wanna be rich?"
"I really don't like web front-ends very much", he kept insisting. Isn't that strange? How do you explain it? I just
kept asking.
Now that I think of it, he probably was a little bit irritated at this point. "Look, pal", he said (in the metaphorical
sense; he didn't actually say it like that; he's very polite). "I have a license to drive a 5-ton truck. But I don't want a
career in truck driving. Hope this clarifies things". He also said something about your average truck driver being more
something than your typical web developer, but I don't remember the exact insult.
Now, I've been working with bare metal hardware for the last 4 years. No OS, no drivers, nothing, and in pretty harsh
environments, hardware-wise. For example, in multi-core desktops, when you modify a cached variable, the cache of the other
processor sees the update. In our chips? Forget it. Automatic hardware-level maintenance of memory coherency is pretty fresh
news in these markets. And it sucks when you change a variable and then the other processor decides to write back its
outdated cache line, overwriting your update. It totally sucks.
A related data point: I'm a whiner, by nature. Whine whine whine. You've probably found this blog through the C++ FQA, so you already know all about my whining. And it's not like I
haven't been burnt by low-level bugs. Oh, I had that. Right before deadlines. Weekends of fun. So how come I don't run away from
this stuff screaming and shouting? Heck, I don't mind dealing with bare metal machines for the rest of my career. Well, trying
out other stuff instead can be somewhat more interesting, but bare metal beats truck driving, I can tell you that. To be fair, I
can't really be sure about that last point – I can't drive. At all. Maybe that explains everything?
I'll tell you how I really explain all of this. No, not right in this next sentence; there's a truckload of reasons (about 5
tons), so it might take some paragraphs. Fasten your seatbelts.
What does "high level" basically mean? The higher your level, the more levels you have below you. This isn't supposed to
matter in the slightest: at your level, you are given a set of ways to manipulate the existing lower-level environment, and you
build your stuff on top of that. Who cares about the number of levels below? The important thing is, how easily can I build my
new stuff? If I mess with volatile pointers and hardware registers and overwrite half the universe upon the slightest error, it
sucks. If I can pop up a window using a single function, that's the way I like it. Right? Well, it is roughly so, but there are
problems.
Problem 1: the stuff below you is huge at higher levels. In my humble opinion, HTML, CSS, JavaScript, XML
and DOM are a lot of things. Lots of spec pages. A CPU is somewhat smaller. You have the assembly commands needed to run C
(add/multiply, load/store, branch). You have the assembly commands needed to run some sort of OS (move to/from system
co-processor register; I dunno, tens of flavors per mid-range RISC core these days?). And you have the interrupt handling rules
(put the reset handler here, put the data abort handler here, the address which caused the exception can be obtained thusly).
That's all.
I keep what feels like most of ARM946E-S in my brain; stuff that's still outside of my brain probably never got in my way.
It's not a particularly impressive result; for example, the fellow sitting next to me can beat me ("physically and
intellectually", as the quote by Muhammad Ali goes; in his case, the latter was a serious exaggeration – he was found too dumb
for the US army, but I digress). Anyway, this guy next to me has a MIPS 34Kf under his skull, and he looks like he's having fun.
That's somewhat more complicated than ARM9. And I worked on quite some MFC-based GUIs (ewww) back in the Rich Client
days; at no point did I feel like keeping most of MFC or COM or Win32 in my head. I doubt it would fit.
Problem 2: the stuff below you is broken. I've seen hardware bugs; 1 per 100 low-level software bugs and per
10000 high-level software bugs. I think. I feel. I didn't count. But you probably trust me on this one, anyway. How many
problems did you have with hardware compared to OS compared to end-user apps? According to most evidence I've got, JavaScript does
whatever the hell it wants in each browser. Hardware is not like that. CPUs from the same breed will run the user-level
instructions identically or get off the market. Memory-mapped devices following a hardware protocol for talking to the bus will
actually follow it, damn it, or get off the market.
Low-level things are likely to work correctly since there's tremendous pressure for them to do so. Because otherwise,
all the higher-level stuff will collapse, and everybody will go "AAAAAAAAAA!!" Higher-level things carry less weight.
OK, so five web apps are broken by this browser update (or an update to a system library used by the browser or any other part
of the pyramid). If your web app broke, your best bet is to fix it, not to wait until the problem is resolved at the level
below. The higher your level, the lonelier you become. Not only do you depend on more stuff that can break, there are fewer people
who care in each particular case.
Problem 3: you can rarely fix the broken things below your level. Frequently you don't have the source. Oh,
the browser is open source? Happy, happy, joy, joy! You can actually dive into the huge code base (working below your normal
level of abstraction where you at least know the vocabulary), fix the bug and... And hope everyone upgrades soon enough. Is this
always a smart bet? You can have open source hardware, you know. Hardware is written in symbolic languages, with structured flow
and everything. The only trouble is, people at home can't recompile their CPUs. Life cycle. It's all about life cycle. Your
higher-level thingie wants to be out there in the wild now, and all those other bits and pieces live according to their
own schedules. You end up working around the bug at your end. Sometimes preventing the lower-level component author from fixing
the bug, since that would break yours and everybody else's workaround. Whoopsie.
How complicated is your workaround going to be? The complexity rises with your level of abstraction, too. That's
because higher-level components process more complicated inputs. Good workarounds are essentially ways to avoid inputs which
break the buggy implementation. Bad workarounds are ways to feed inputs which shouldn't result in the right behavior,
but do lead to it with the buggy implementation. Good workarounds are better than bad workarounds because bad
workarounds break when the implementation is fixed. But either way, you have to constrain or transform the inputs. Primitive
sets of inputs are easier to constrain or transform than complicated sets of inputs. Therefore, low-level bugs are easier to
work around. QED.
Low-level: "Don't write that register twice in a row; issue a read between the writes". *Grump* stupid hardware. OK, done.
Next.
High-level: "Don't do something, um, shit, I don't know what exactly, well, something to interactive OLE
rectangle trackers; they will repaint funny". I once worked on an app for editing forms, much like the Visual Studio 6 resource
editor. In my final version, the RectTracker would repaint funny, exactly the way it would in Visual Studio 6 in
similar cases. I think I understood the exact circumstances back then, but haven't figured out a reasonable way to avoid them.
Apparently the people working at that abstraction level at Microsoft couldn't figure it out, either. What's that? Microsoft
software is always crap? You're a moron who thinks everything is absolutely trivial to get right because you've never done
anything worthwhile in your entire life. Next.
Problem 4: at higher levels, you can't even understand what's going on. With bare metal machines, you just
stop the processor, physically (nothing runs), and then read every bit of memory you want. All the data is managed by a single
program, so you can display every variable and see the source code of every function. The ultimate example of the fun that is
higher-level debugging is a big, slow, hairy shell script. "rm: No match." Who the hell said that, and how am I
supposed to find out? It could be ten sub-shells below. Does it even matter? So what if I couldn't remove some files? Wait, but
why were they missing – someone thought they should be there? Probably based on the assumption that a program should
have generated them previously, so that program is broken. Which program? AAARGH!!
OK, so shell scripts aren't the best example of high-level languages. Or maybe you think they are; have fun. I don't care. I
had my share of language wars. This isn't about languages. I want to move on to the next example. No shell scripts. You have
JavaScript (language 1) running inside HTML/CSS (languages 2 & 3) under Firefox (written in language 4) under Windows (no
source code), talking to a server written in PHP (language 5, one good one) under Linux (yes source code, but no way to do
symbolic debugging of the kernel nonetheless). I think it somewhat complicates the debugging process; surely no single debugger
will ever be able to make sense of that.
Problem 5: as you climb higher, the number of options grows exponentially. A tree has one root, a few thick
branches above it, and thousands of leaves at the top. Bend the root and the tree falls. But leaves, those can grow in whichever
direction they like.
Linkers are so low-level that they're practically portable, and they're all alike. What can you come up
with when you swim that low? Your output is a memory map. A bunch of segments. Base, size, bits, base, size, bits. Kinda limits
your creativity. GUI toolkits? The next one is of course easier to master than the first one, but they are really
different. What format do you use to specify layouts, which part is data-driven and which is spelled as code? How do you handle
the case where the GUI is running on a remote machine? Which UI components are built-in? Do you have a table control with cell
joining and stuff or just a list control? Does your edit box check spelling? How? I want to use my own dictionary! Which parts
of the behavior of existing controls can be overridden and how? Do you use native widgets on each host, surprising users who
switch platforms, or roll your own widgets, surprising the users who don't?
HTML and Qt are both essentially UI platforms. Counts as "different enough" for me. Inevitably, both suck in different ways
which you find out after choosing the wrong one (well, it may be obvious with those two from the very beginning; Qt and gtk are
probably a better example). Porting across them? Ha.
The fundamental issue is, for many lower-level problems there's The Right Answer (IEEE floating point). Occasionally The
Wrong Answer gains some market share and you have to live with that (big endian; lost some market share recently). With
higher-level things, it's virtually impossible to define which answer is right. This interacts badly with the ease of hacking up
your own incompatible higher-level nightmare. Which brings us to...
Problem 6: everybody thinks high-level is easy, on the grounds that it's visibly faster. You sure
can develop more high-level functionality in a given time slot compared to the lower-level kind. So what? You can drive faster
than you can walk. But driving isn't easier; everybody can walk, but to drive, you need a license. Perhaps
that was the thing mentioned by the Amazing (Ex-Web) Developer: at least truck drivers have licenses. But I'm not sure that's
what he said. I'll tell you what I do know for sure: every second WordPress theme I tried was broken out of the box, in one of
three ways: (1) PHP error, (2) SQL error and (3) a link pointing to a missing page. WordPress themes are written in CSS and PHP.
Every moron can pick up CSS and PHP; apparently, every moron did pick them up. Couldn't they keep the secret at least
from some of them? Whaaaam! The speedy 5-ton truck goes right into the tree. Pretty high-level leaves fall off, covering the
driver's bleeding corpse under the tender rays of sunset. And don't get me started about the WordPress entry editing window.
Now, while every moron tries his luck with higher-level coding, it's not like everyone doing high-level coding is... you get
the idea. The other claim is not true. In fact, this entry is all about how the other claim isn't true. There are lots
of brilliant people working on high-level stuff. The problem is, they are not alone. The higher your abstraction level, the
lower the quality of the average code snippet you bump into. Because it's easy to hack up by the copy-and-paste method, it sorta
works, and if it doesn't, it seems to do, on average, and if it broke your stuff, it's quite likely your problem,
remember?
Problem 7: it's not just the developers who think it's oh-so-easy. Each and every end user thinks he knows
exactly what features you need. Each and every manager thinks so, too. Sometimes they disagree, and no, the manager doesn't
always think that "the customer is always right". But that's another matter. The point here is that when you do something
"easy", too many people will tell you how it sucks, and you have to just live with that (of course having 100 million users can
comfort you, but that is still another matter, and there are times when you can't count on that).
I maintain low-level code, and people are sort of happy with it. Sometimes I think it has sucky bits, which get in your way.
In these cases, I actually have to convince people that these things should be changed, because everybody is afraid to
break something at that level. Hell, even bug fixes are treated like something nice you've done, as if you weren't supposed to
fix your goddamn bugs. Low-level is intimidating. BITS! REGISTERS! HEXADECIMAL! HELP!!
Some people abuse their craft and actively intimidate others. I'm not saying you should do that; in fact, this entry is
all about how you shouldn't do that. The people who do it are bastards. I've known such a developer; I call him The
Bastard. I might publish the adventures of The Bastard some day, but I need to carefully consider this. I'm pretty sure that the
Amazing Developer won't mind if someone recognizes him in a publicly available page, but I'm not so sure about The Bastard for
some reason or other.
What I'm saying is, maintaining high-level code is extremely hard. Making a change is easy; making it right without breaking
anything isn't. You can drive into a tree in no time. High-level code has a zillion requirements, and as time goes by, the
chance that many of them are implicit and undocumented and nobody even remembers them grows. People don't get it. It's a big
social problem. As a low-level programmer, you have to convince people not to be afraid when you give them something. As a
high-level programmer, you have to convince them that you can't just give them more and more and MORE. Guess which is
easier. It's like paying, which is always less hassle than getting paid. Even if you deal with a Large, Respectable Organization.
Swallowing is easier than spitting, even for respectable organizations. Oops, there's an unintended connotation in there. Fuck
that. I'm not editing this out. I want to be through with this. Let's push forward.
The most hilarious myth is that "software is easy to fix"; of course it refers to application software, not "system"
software. Ever got an e-mail with a ">From" at the beginning of a line? I keep getting those once in a while. Originally, the
line said "From" and then it got quoted by sendmail or a descendant. The bug has been around for decades. The original hardware
running sendmail is dead. And that hardware had no bugs. The current hardware running sendmail has no bugs, either. Those bugs
were fixed somewhere during the testing phase. Application software is never tested like hardware. I know, because I've
spent about 9 months, the better part of 2007, writing hardware tests. Almost no features; testing, exclusively. And I was just
one of the few people doing testing. You see, you can't fix a hardware bug; it will cost you $1M, at least. The result
is that you test the hardware model before manufacturing, and you do fix the bug. But with software, you can always
change it later, so you don't do testing. In hardware terms, the testing most good software undergoes would be called
"no testing". And then there's already a large installed base, plus quick-and-dirty mailbox-parsing scripts people wrote, and
all those mailboxes lying around, and no way to make any sense of them without maintaining bugward compatibility (the term
belongs to a colleague of mine, who – guess what – maintains a high-level code base). So you never fix the bug. And most
higher-level code is portable; its bugs can live forever.
And the deadlines. The number of versions of software you can release. 1.0, 1.1, 1.1.7, 1.1.7.3... The higher your
abstraction level, the more changes they want, the more intermediate versions and branches you'll wind up with. And then you
have to support all of them. Maybe they have to be able to read each other's data files. Maybe they need to load each
other's modules. And they are high-level. Lots of stuff below each of them, lots of functionality in them. Lots of modules and
data files. Lots of developers, some of whom mindlessly added features which grew important and must be supported.
Damn...
I bet that you're convinced that "lower-level" correlates with "easier" by now. Unless you got tired and moved elsewhere, in
which case I'm not even talking to you. QED.
Stay tuned for The Adventures of The Bastard.
I agree with most of what you said here but I don't see how you (By
you I mean low level devs) are immune from the High Level – Low Level
Chain of responsibility.
E.g., if you're a device driver writer for a GPU, you can still
mess up some pipeline code and have other people depend on your buggy
code, which you have to forever maintain because the next big game pimps
your GPU and the big bosses couldn't care less.
Admittedly this level is still higher than machine level, but I
assume low level > machine level.
–
Re: C++ FQA – I agree 99.9%
I'm constantly amazed at how C++ "expert" devs are just scholars of
trivia instead of actually being able to write up decent code. Or do
they have a new title for that now? System Architect?
...
bah!
Well, yeah, you can't get out of the food chain, just choose your
position in it... If you're doing low-level, and your stuff got popular,
every move of yours can break things, so you have to watch your step. If
you're doing high-level, then until your stuff gets *really* popular,
you have to work around all those lower-level bugs/quirks; then, when
you become awfully important, the low-level crowd will actually bend
their development towards your needs.
Note that the low-level people have trouble /after/ the big success
and the high-level people have it /before/ that. If your thing is so
popular that loads of stuff depends on it and you're afraid to break
something, you're already in a very good position.
[...] On second thought, I don't know if I'd really recommend it.
Remember how I told low-level programming was easy? It is,
fundamentally, but there's this other angle from which it's quite a
filthy [...]
I wrote a blog post about this last November entitled "I'm afraid of
low level programming." http://www.litanyagainstfear.com/blog/2007/11/27/im-afraid-of-low-level-programming/
I've done a bit of reflecting on what I wrote and what you wrote, and
I'm starting to think that newcomers to software engineering can't
handle the low level well. For example, to use a calculator to do simple
math usually requires a basic knowledge of how those operators work.
Even for trigonometry it helps to know and understand the sin, cos, and
tan functions, but there's no way you'd have to dive into deep calculus
topics like Taylor series and the like, which calculators use in order
to compute those functions.
I personally have had a bit of exposure to C and assembly through my
classes, and it scares the crap out of me. To know that your program
will fail due to the slightest buffer overrun or because you misplaced
one bit is just frightening, and to newcomers downright frustrating.
Low-level may be easy to you since you've been in the business (I'm
assuming) for quite a few years, but for apprentices like me, I'll stay
high level for now. If you have any suggestions as to what I could do to
understand more low-level stuff I'd appreciate it.
There are two kinds of low-level: the user space kind and the kernel
kind.
For the user space kind, the most cost effective way to become
comfortable with it I can think of is to write a C compiler (for a
subset, not the whole language). If it's relevant for you, you can
usually get academic credit for it by taking a compiler construction
course (check the syllabus; it could be called "compiler construction"
and not have a bunch of assignments about a C compiler in it). I'd go
for the whole toolchain – a lexer, a parser, an assembly code generator,
an assembler and a linker, but using an existing assembler is probably
cool, too.
The kernel kind of low-level you don't care about unless you're in
the kernel or on bare metal. There's a single insight there – most
hardware is memory-mapped. That is, you can tell it what to do by
reading/writing C pointers. From there, it's a bunch of specific rules
for each piece of hardware, and a sprinkle of crap for interrupt
handling and kernel mode CPU instructions. I wouldn't worry about it
anyway since this knowledge doesn't give you much value if you program
on top of a PC or a mobile OS. The user space stuff is what can really
help (to optimize and to debug and to hack on code bases written in C++
when they should have really been written in Java or C#).
As to fear – the whole thing isn't anywhere near, say, advanced math
in terms of complexity. It has scary failure modes (buffer overflows and
stuff), but you don't need that much brain power to handle it. Which
makes it a cheap superpower (lots of people are afraid of it). So
bringing yourself to the point where you can skim through
compiler-generated assembly and follow it, is a fairly cost-effective
investment. And if you write your own toy compiler, you'll most likely
get there.
As to years of experience: their value is mostly in
building/destroying character. But you don't learn nearly as much as you
do in school, so it's not a big deal.
This is a great post and is reinforcing my desire to learn
assembly.
Reading this led me to a strange thought – We have to hope that there
isn't a "bug" in reality (lower than the hardware), because we can't get
the source for that... yet.
To me, assembly is something which is good to be able to read (so you
can see what a compiler does), and to generate (so you can write a
compiler; although the right thing these days is usually to generate C,
not assembly, so it's mostly about understanding what an existing
compiler does).
Writing assembly is something you do for "system" code (boot loaders,
kernels, that kind of thing). For optimization, C with intrinsics should
do the trick. Assembly is gnarly to write.
The wonderful thing about high-level is, it's still never high
enough. Consider the following, the crux of what quite possibly is the
world's smallest MVC engine for web apps:
static function sendResponse(IBareBonesController $controller) {
    $controller->setMto($controller->applyInputToModel());
    $controller->mto->applyModelToView();
}
The rest of the framework totals fewer than 60 lines, and implements
everything except "applyInputToModel()". The jury is still out as to
whether web developers will love me or hate me for introducing "yet
another level".
<60LOC? How does it applyModelToView?
This rings a bell, thanks for bringing this topic up.
As someone who codes both high-level and low-level, I agree about the
relative difficulty of high-level programming.
For me there's another aspect. At low level, I can understand "the
whole picture", which is important for becoming proficient. As you say,
you can hold the entire ARM CPU in your head. Which can't be done for a
jQuery / JS / XML & DOM & JSON / AJAX / PHP / CSS & HTML
stack. The mere fact that you have something in your head causes you to:
1) Enjoy the work more, because there are no dark dusty corners
you're afraid to look into
2) Be more productive, because you spend less time worrying about stuff
you don't know and reading tutorials on another cool JS library /
framework. Nothing new in the ARM architecture. So just do the work.
I think there are people who actually do hold a large portion of the
web stack in their brains, which is quite a feat. I'd say that it takes
more space in the brain compared to lower-level kind of crud though, and
I'd guess it's still harder to work with even if you know it well,
because you have to "consult your knowledge" more often ("what if
this?.. what if that?..") – be a "lawyer" more of the time and thus a
"hacker" less of the time.
Any idiot can bend steel, but it takes a genius to come up with the
Eiffel Tower.
What I meant to say is that high-level engineers don't care how the
plumbing gets into it, but structures like the Guggenheim or the
Westminster Palace are not thought up by plumbers or carpenters. The
people who glue and hammer are important, but it is rare when genius is
found there.
I admit, I am frustrated by the people who only know the high-level
stuff. But, I know both, and I would prefer to design on a grander scale
rather than be pigeon-holed into CPU instruction work.
I wonder what could make you "Insulted" in what I wrote; I basically
said low-level was easier to deal with, which doesn't seem to contradict
your claim that low-level is somehow a lesser domain. Of course it
doesn't support your claim, either; regardless of what I wrote up there,
I do think you're wrong, and in particular, the "right" analogy would be
comparing the "macro-architecture" of the tower to the
"micro-architecture" of the joints making it up or, still lower, the
engineering behind the process of melting the steel, or something. It's
basically a design-to-design comparison, not a design-to-"labor" one. So
while it's perfectly legitimate to prefer a "larger scale", it's
basically a matter of taste and not a matter of objective complexity
measurement.
To Insulted: the main problem of software design/architecture is
that you don't have to be a genius to convince your boss to build a
Guggenheim. With all the sad consequences arising therefrom...
Or, in other words, in the software world any idiot can build an Eiffel
Tower, but it takes a genius to bend silicon.
There's a low level error on this whole page.
I've been programming in Java and C# for a while now, but I want to
get a little closer to the hardware. I read a simple internet tutorial
to get the basic syntax for C++, and I understand the basics of pointers
and memory allocation. However, what is the process by which one goes
about locating and manipulating specific hardware implementations?
@B: I'm not quite sure what you mean, but you could say that
"low-level" breaks down into (1) raw pointers and memory management, (2)
programming in assembly/intrinsics for speed, (3) OS-level assembly,
memory-mapped hardware and interrupt handling. If you're on the desktop,
(1) and (2) happen at user-level programs for optimization and (3)
happens at kernels and device drivers, so this is where you go if you
specifically want to get to lower levels (although I'd usually be
dragged into a new programming environment because of having to do new
sort of work and not vice versa).
I agree with this guy, I like low level stuff, I want to be able to
write in assembly, but first I feel like I should learn all about how to
write gui libraries in c++, but no one seems to want to help, and the
libraries are impossible to read
Now I'm curious about how hardware tests are performed. Could
software be tested in a similar way, too? Please send answers to
egarrulo at gmail.com too, since this blog doesn't provide an option to
enable notifications about updates.
First of all, there are tools for hardware verification – formal
verification by static code analysis, and automatic random test
generation attempting to cover as many execution paths as possible. I
don't have experience with these, so despite having reasonable success
in hardware testing, I'm in one potentially important way not qualified
to discuss it (actually I'm in a frame of mind where I prefer to
actively avoid such tools, but I don't have experience to back it up).
The way I've seen hardware tested is this: you implement a software emulator, usually structured quite differently from the
hardware model and more simply, without the parallelism inherent to hardware and with little optimization. Then you generate
random tests, as well as tests for some extreme cases, run them on both and compare the results (usually raw register or memory
snapshots at the end of the run). The tricky part is to identify the extreme cases and to randomize from an interesting
distribution. This is different from software testing in that you have 2 full-blown implementations, and you run plenty of tests
on plenty of machines for plenty of days; more on those differences below.
There are two ways in which hardware is easier to test than software,
IMO fundamental enough to preclude the testing of software along the
lines of hardware testing.
Firstly, hardware tends to be designed to minimize dependence on
accumulated state. For instance, the execution of a CPU instruction will
depend, in a complicated pipelined CPU, on what instructions executed
previously, but only on a relatively small window; a Markov chain, if
you like. It is not the only way to design hardware and I've seen real
hardware designs that were very hard to test because of executing long
processes with data-dependent decision making all in hardware, and in
fact those tended to be manufactured with bugs. It is also not
impossible to design software such that it depends on a limited amount
of state; class invariants are a simple example of being able to free
oneself of thinking through many different cases of what sequences of
events are possible and would the program work correctly in all these
cases. So basically it's a question of "designing for testability"
(though this term is reserved for other sorts of testing in the HW
design jargon, namely, testing the production rather than the design
itself); arguably, the problems solved in hardware tend to be better
suited for testing because of there being less context affecting the
behavior of a component. A related question is where the complexity is
that bugs come from. In hardware, the complexity comes from
implementation – vast parallelism and tricky speed or space
optimizations, whereas the spec is usually relatively simple, so that a
reference implementation and a set of tests giving good spec coverage
are also relatively easy to produce. In software, the complexity quite
often comes from definition, where it would be silly to write two
implementations and if one did, it would be hard to tell which one
behaved correctly, because there are actual holes in the spec. And in
fact software that is not unlike hardware in the sense of being easy to
specify but hard to implement – say, a classifier taking a region of
interest in an image and saying if it's a snapshot of a vehicle or not –
can be and frequently is tested similarly to hardware: a gold model
(manually marked database in the case of machine learning) and tests
performed for hours or days.
Secondly, hardware is easier to test because it is expensive not to
test it. That is, people who specify hardware are more careful not to
overload the spec with features with unclear interactions, and people
who control its schedule would never think of optimizing away the
testing. That's because of the palpable high price of a hardware error
compared to a very uncertain price of software errors (in reality you
can work around many hardware bugs but hardware bugs sound like
something extremely scary). This social aspect of the problem seems to
me no less critical, and possibly more critical, than the technical aspect
above.
In short, I think software is destined to be orders of magnitude more
buggy than hardware.
P.S. Is there actually a way to configure WP so that it notifies you
when someone replies to your comment (as opposed to anything on the
site)? WP doesn't seem to even know who talks to whom, so I figured it
couldn't properly notify, either.
If I understand correctly, lower level is easier because:
- you have a clear idea of what you are trying to achieve.
- you depend less on the clear ideas of others.
- you can actually implement your clear idea pretty fast in some high-level
language...
- and then, you sometimes have a clearer idea of what could actually be
going on (whenever your stuff is running).
I grew up with the illusion that assembly language was low level. At
first I knew it was not, then I just forgot about it, because in most
cases you "can" learn the assembly language and even understand it
(although it sometimes becomes difficult, as with modern Intel CPUs,
where I have no idea what's going on any more). And 20 years later, I
suddenly realize (again) that I never did any low-level thinking (I
don't even know what a Verilog statement is).
I have seen some high-level assembly languages, and I really wonder
what C has to offer in terms of speed of development that is better
than those. C is nothing more to me than a macro assembler (and a lot of
people do look at the disassembled version of their C software).
What "high level" really lacks is some standard way to do things. It
really should be all about standardizing. As I see it, everything (high
level) should be written in terms of a standard assembly language. So you
would have the tool to do quickly whatever simple thing you need to get
done, but would also have the means to "understand" what you are doing,
where understanding would be "knowing what standard assembly
instructions your program maps to".
But now that I realize that assembly is such a high-level beast, I
wonder if one could not work up something better than that as the basic
low-level representation any high-level language can map its result to.
What's the use in cutting off high-level programmers' understanding of
what their code physically means?
The trouble is that there's an infinite number of ways to build a
physical (virtual) machine in the first place. There's a Tower of Babel
problem with languages that can't be solved, namely, that there's always
an infinite number of ways to spell things and no way to choose.
I couldn't agree with this post more. I personally develop in
C++ (consistently for about 4 years now) but just don't see how I could
have understood C++ without writing my toy compiler. Things such as: how
come there exists a this pointer? I could never have thought of a
thorough explanation without attempting to implement a compiler. I just
don't think a developer can truly call themselves a developer
(especially for C/C++) without knowing how a compiler would work.
Another example is function pointers. Something like const char *(*f(int
index))(const double *const val, struct tree *); Just how can someone
understand the meaning of all this (to an intuitive level) without
trying to implement a compiler and thus be able to parse a declaration
into an abstract syntax tree? Maybe it's just me, but I don't see how it's
possible.
Oh sigh, how I've longed to have a complete understanding of
hardware/bios interrupts, for kernel space programming. But there is no
real, 'go do this br0, it's l3g1t I swear!!!!!!!!11111', like we higher
level programmers have been led to learn by.
For instance: I can barely learn on my own without a video tutorial
holding my hand the whole way. Personally I find my dependency terribly
disgusting, but that's just the world nowadays... ANYWHO, let's stop the
banter and get to the questions:
1: The main thing that I don't get is that when getting the BIOS
interrupt for, let's say, keyboard input, you would write something
like:
mov ebx, int8030
(granted that's the first thing that popped into my head, I know it's
wrong)
I don't understand _what_ the int8030 is. I can understand that
you're moving the int8030 instruction into the 32-bit base register, but
is that it?
2: In my browsing of several Operating System Development forums
(OSDev.org mainly (yes, I do want to be an OSDev)), they seem to throw
hissy fits whenever one of their members uses said BIOS interrupt. Is
there a reason to use something else? Or are they just PMS-ing?
3: I'm trying to limit my time with ASM as much as possible, as it
tends to bend my mind in ways that I'm sure weren't meant to happen
(hoping to use ASM just for the interrupts and bootloader (C++ for the
rest)), but alas, as a hopeful OSDev, I need to know ASM. So, any good
ways to learn? I've been reading the Intel 8086 processor manual, "The
Art of Assembly", and CODE by Charles Petzold, but I'm still looking for
better options out there. And as you seem to hold a fairly formidable
grasp of ASM, I was hoping you would tell me _where_ you learned.
Kthnxbai
Unfortunately, my knowledge of x86 userspace assembly is scarce, and
my knowledge of the parts of x86 that interest BIOS/kernel hackers is
almost non-existent, so I can't say anything very helpful, really; my
assembly experience mostly comes from RISC and DSP processors. I'm sure
any experienced low-level x86 developer would be way more helpful than
me.
There is also one other thing that makes low level easier: the algorithms
used in low-level stuff are usually not too complicated. For example, to
optimize a low-level program you need to read assembly and reorganize
your C code to generate better assembly, or simply write part of the code
directly in assembly.
Try database optimization. To optimize a database you have to know how
it really works. That means you have to know how the database achieves the
ACID properties of its transactions, how B-trees and other data structures
work, etc. It is highly algorithmic and complicated stuff.
True in many cases – although optimizing FFT on a bare metal target
can also be complicated, the difference from databases being that it's
easier to be fully aware of what's going on.
Agreed, in DSP there is a lot of complicated math. But DSP is the
exception rather than the rule. Most low-level programmers that I know do
not care about algorithms and math. It simply does not apply to their
work.
Btw, I am working as a high-level programmer (right now I am involved in a
Machine Learning project) but I am preparing to switch to low level. I have
started learning the Windows kernel and it is really a lot of fun for me.
Things are more predictable and you have control over the entire machine. It
stimulates my imagination ;)
I would agree with this post 100%.
Low level is easier; I would take hardware, ASM or C any day over some
web front end. Possibly why I am in Electronic Engineering rather than
Computer Science.
There is just too much undefined behaviour at the high level
(Javascript is a great example), and also too many bugs that are beyond the
developer's control. Even at the level of C I've had code fail due to
compiler bugs.
If a CPU has such undefined behaviour, then almost nothing works on it,
and no one will use it.
On the other hand, ASIC engineering is tougher than low-level
programming IMO – not necessarily fundamentally as much as practically,
what with all the quickly changing process characteristics and the time
it takes to experiment with anything (simulation latencies,
synthesis/placement&routing latencies, etc.) At the electrical level
rather than logical level, there's also a lot of ugly and not so
repeatable stuff going on (signal/power integrity, etc.) Perhaps the
closer you are to actual chips, the worse (IP companies are a lot like
software companies and are less exposed to this type of horror, I
think.)
You sound like me... Hey, wait a minute, who are you and what have
you done with me? And how did you keep my comments from me for over 4,
uh, 5 years??
Good post, I chuckled a lot... in that sad sort of way because it is
true. I think I yelled a few times too.
I'm a kid (9) and I've been thinking assembly is not so low level in
a way, because it does have its dependencies.
Oh, I can't be bothered. What I'm saying is it's a bit hard for an OS dev,
because with assembly there are three types, most of them quite different:
x86, x64, ARM. So really, get a bootloader tutorial for ARM, x86 or x64 (I
would be doing x64), then you could write the rest in standard C. Call me a
noob. Oh, and don't use cpp, because apparently it requires a runtime, which
would not be good for an OS, because a minimal OS does not have a cpp
runtime. (Oh yeah, can anyone tell me a great place to learn C? I'm
following the one on cprogramming, but tutorialspoint is where I learnt
Python.)
Oh yeah, I've got quite a good understanding of stuff like loops and a
bit of networking.
http://c.learncodethehardway.org/book/ is where
hipsters appear to be learning C these days.
Very interesting post, thanks. My background is exclusively high
level, so the low level stuff is pretty intimidating to me.
I do intend to learn the low level stuff in the near future. I think
that the more levels of abstraction you understand, the better.
I enjoyed this post a lot, though I'm no great programmer at all (low
or high level...)
I'm going to have to call bull on the HW being good. Sure it may get
caught in testing, but it may also get punted!
I've seen shipping chips (in the millions!) that will randomly halt
when being brought out of a certain low power mode. Suggested
workaround? "Don't use that mode."
Gee thanks.
EEs have bugs in their schematics as well. It happens. Hopefully all
the major bugs get worked around before the final design verification
step. Sometimes not, and software has to work around the bugs.
Sometimes a selected component is just crap. Lowest bidders exist for
a reason.
Very nice piece of craftsmanlike writing. Not very surprising from the guy
who put together the C++ FQA, where I basically learnt, a few years ago, the
C++ I know. I haven't found such an FQA for other languages, even though
it would be of tremendous help. The closest thing to a JavaScript FQA is
wtfjs.com, but if you skim over wtfjs you see that it's "only" about
implementation weirdness that may be shared by all JS VMs. I won't dive into
JS design any further than saying that it's *far* from being the best
language for GUI development, in the browser or outside it. I need to say
this because most people think it relates to the execution environment in
some way, except it doesn't at all: doing JS GUIs for the desktop outside
the browser would be as much crap as it is in the browser, even if
[,,,].join() were not legal code. JS's prototype OO system as it is
implemented is not a good fit for GUI dev or any callback-based code. With
the advent of ECMAScript 6 generators, the principal flaw of JS will become
invisible if you only use generators. The flaw I'm talking about is the fact
that if you pass a method as a callback, you lose the reference to the
object the method relates to: namely, when the callback is called, "this"
becomes the "window" object in the browser. This is yet another flaw, but
it's the flaw that makes JS pseudo-class-based code not as easy to write and
maintain as it could be. Why do I care? Everybody knows about it! Wrong.
It's kind of a secret. I'm kind of literate in web-based technology (I built
my first social network on the Internet 10 years ago) and I discovered it
only a few months ago. That doesn't really surprise me, since I started
professional programming 2 years ago. But still. For me and others this
JS flaw is a major blocker, because class-based code is not usable and
object-oriented code is not good enough, since you cannot override
"methods". The best JS code that can be written is as expressive as C code,
except it's dynamically typed. A bad abstraction, or if you prefer a bad
design, is the worst kind of bug I've found. Bugs in the lower levels of the
stack are OK. There is no bullet-proof, production-tested GUI code design
that I know of. I don't know every bit of the latest incarnation of a good
framework in GUI land, called AngularJS, but it has its issues and, anyway,
it has JS's issues. Most people think that once you know what
Model-View-Controller is, you can start doing good GUI applications. WRONG.
MVC is a software pattern invented in the 70's. As of right now there is
no clear idea of what a good design for a GUI is. MVC is an idea; it's
abstract and, of course, as usual, it doesn't fit every situation. And there
is so much in MVC; http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
is a good start, and have a look at http://addyosmani.com/resources/essentialjsdesignpatterns/book/#detailmvcmvp.
Why am I so much about design? To my mind, this is where the
maintainability of any application lies. And at least in my current job,
it's a road blocker. This is not only about the design of the proprietary
GUI framework but also about the components the GUI has to talk to and the
way it has to interact with them. I won't dive too much into the design of
the thing, just stating the whys of it:
0) Political: the first incarnation of the stack used linking, and
when a core dump was generated it had the name of the GUI process, so it
was the GUI's fault, of course. Solution: break every component into a
process; core dumps now have the name of the process that failed.
Still, GUI people need to dive into their logs and/or the core dump of the
component that crashed to assert that it's not the GUI's fault; if you can't
assert that the bug is in another component, the component's maintainer
won't care much and you will need to work around it... And there is one
more process required to handle all the messaging between all the
processes. Instead of an architecture centralized around the GUI, it is
centralized around the messenger, making it possible for every component
to talk to every other component without the GUI knowing about it.
1) Marketing (maybe this is actually 0): marketing people can describe
the platform using neat words like "message passing", "security" and
other things that, I was told, sounded good to clients. The only new words
are "process" and DBUS.
2) Performance: the GUI is written in a high-level language which has no
battle-tested compiler and whose reference interpreter is said to be
slow. So critical code must be written in C or C++ even if there is no
clear advantage in terms of speed or memory usage.
3) Intelligence: GUI people are stupid, or just Python people doing GUIs
in embedded, because otherwise, if they were doing Web, NodeJS is the
best tool, so they're still stupid to use Python. So, no intelligence in the
GUI. Everything dubbed smart must be done in lower-level components, which
are written in C/C++, because the people writing them are smart, maybe
because those people are more experienced, which is generally true in my
company, but that leaves the question of why most GUI people have less than
1 year of dev experience in my company.
4) Social: people (among them non-developers, but also other developers
who are C/C++ devs and mostly do not like Python) can
relate easily to GUIs, because they can "see" them. So GUIs are dubbed
easy.
5) Python: Python is slow and consumes a lot of memory. Why don't we use
C++ or C...
Because of all of that, the GUI coding is made very difficult. Never mind
the code that creates the general flow between the different parts of the
GUI, or the graphical part of the code which deals with displaying the
correct rectangles; everything that is under it, the middle end
(http://blog.getify.com/tag/middle-end/) part of it, is difficult to write.
In the above part I was explaining what is particular to my case,
working on a GUI running on 5 hardware platforms in the embedded space for a
product that is used by more than three million people, generating LOADS
of money.
I will stop here with everything particular to my case and only talk about
what makes GUIs difficult: we need to work at an abstract level, dealing
with a lot of abstractions. Getting a design that will hold for the
entire life of an application is, from what I've had the occasion to see,
a difficult task. I'm not saying it's not true for other fields. The
particular thing here is "abstraction". I'm not convinced that lower-level
code needs a lot of abstractions, otherwise you would be using
more C++ than C. Needing abstraction is OK, but those abstractions are
not *known* by people, not even by people claiming to do GUIs. Think about
it: what makes up GUI code? When is the last time you heard about a state
machine or a pub/sub pattern in Javascript code or desktop code?
Everything people know about the actual code behind GUIs is "widgets",
and sometimes callbacks or events; sometimes they hear about an object
tree and don't know that there are actually several object trees, and some
know about layout but not reflow or layouting. Middle-end, anyone?
I know of no abstractions or patterns or optimization tricks used in
lower-level code that have no incarnation in GUIs. Cache invalidation?
There it is. Binary or other space/time optimizations? There they are. Even
cache misses are taken care of, in game dev at least. Algorithms? Low-level
code is code with CPU-intensive algorithmic tasks, but so far, from what
I've seen, those algorithms are backed by standards and many other
implementations. Difficult algorithms, even CPU-intensive ones, are
best dealt with in higher-level languages, even if that means only(!)
gluing together C/C++ code.
Sorry, it's a long comment. But I'd also like to comment on the
different problems raised in this article:
- the stuff below you is huge at higher levels: GUIs don't always
think, or need to think, about everything top-down like in game dev or some
incarnations of embedded.
- HTML, CSS, JavaScript, XML and DOM are a lot of things: not really,
they are data & algorithms.
- a CPU is somewhat smaller: it has I/O with several components in
different modes. A web GUI's I/O is one user or one server.
- Keeping the spec in your head: true.
- «the stuff below you is broken»: this is not always below, but yes, and
it is also true at lower levels. The thing at higher levels is that people
don't want or need to fix it. Also the practice is less documented or
"obvious".
- «Sometimes preventing the lower-level component author from fixing the
bug, since that would break yours and everybody else's workaround.
Whoopsie.» True! Awful. Especially since reworking code is not granted
or supported by management.
- «Hardware is written in symbolic languages, with structured flow and
everything.» This is (only) a tooling problem, but true.
- «Don't do something, um, shit» True! Especially when the maintainers (if
any) don't know their component well enough... or don't want you to
know.
- «surely no single debugger will ever be able to make sense of that.»
This is (only) a tooling problem, but true. See http://www.sysdig.org/
- «But leaves, those can grow in whichever direction they like.» It's
more like a graph than a tree, and it's kind of controlled, but you are
right.
- «HTML and Qt are both essentially UI platforms» Abstractions are at
some level the same. With the wrong abstraction, porting can be very
difficult, but not impossible (as usual ;)
- «for many lower-level problems there's The Right Answer (IEEE floating
point)» True!
- «everybody thinks high-level is easy» because it's visible and a
particular set of abstractions, not just because it's visible. I got bug
reports for bugs in things handled by other components, but since it was
visual, it was a GUI bug: a 3D engine and a game GUI are completely
different and don't relate that much.
- «There are lots of brilliant people working on high-level stuff. The
problem is, they are not alone.» It's not as if there were no stupid
lower-level developers, the kind of people who think that dynamism is
impossible to compile into assembly code, or not legal compiled code.
There seems to be a general belief that lower level means more difficult
and smarter people. Crazy stuff.
- «Each and every end user thinks he knows exactly what features you
need.» I was asked to change my code to an algorithm provided by the
client which was worse than every piece of code that existed in the company
so far. Instead, I asked to try what had been rejected in the initial code
6 months before, turning a 3 into an 8 as the value of a timeout. I didn't
sleep for 2 days before being able to ask for it.
- «BITS! REGISTERS! HEXADECIMAL! HELP!!» True.
- «Some people abuse their craft [and position] and actively intimidate
others.» Awful. Real story: «- Why don't we write the GUI in C? –
Because it can't abstract things easily. – Why not C++ then? – Did you
not read the C++ FQA?! – What is the C++ FQA?»
- «High-level code has a zillion requirements» True, but the worst part is
that those are fuzzy and not handled correctly by the contract. I mean,
it's OK to have a fuzzy spec, but the dev (any dev) shouldn't have to cope
with it all the time and alone.
- «the chance that many of them are implicit and undocumented and nobody
even remembers them grows» True. Awful.
- «As a low-level programmer, you have to convince people not to be
afraid when you give them something.» Doesn't work anyway ;)
- «As a high-level programmer, you have to convince them that you can't
just give them more and more and MORE» True, even if in my company, the
less the GUI does, the happier they are, even if it's against their own
benefit.
Also, so far the most "enlightened", or just fair with GUI people, are
people who actually were building chips.
Insulted wrote:
«I admit, I am frustrated by the people who only know the high-level
stuff. But, I know both, and I would prefer to design on a grander scale
rather than be pigeon-holed into CPU instruction work.»
This is not about "'high' not knowing 'low'" but the other way
around. What about people who only know the low-level stuff? What is
«design on a grander scale» meant to mean, exactly? This kind of
talk is done by pseudo-technical, actually marketing, people who
know shit about what development really is. «- We will need a fast
database. – Seriously? Dear (genius) architect, which db shall I use? –
One that stores stuff fast.» I like grander-scale design, and I liked
this post and didn't feel insulted. I would prefer being in a position
to do what can be qualified as "grander-scale design", but that will not
mean that I will leave what you seem to think is a "resource" task only
to the "resource". And if I would like to be an "architect", it is not only
because I like it; it is also because so far, the writings or production
in general of those I've had the occasion to see are shit. And
their skills, both technical and social, are shitty. If you know some
dubbed architect who doesn't fit the previous description, let me
know.
«The people who glue and hammer are important, but it is rare when
genius is found there.»
Rare? How would you know, anyway? As if you need to be a genius to be an
architect.
I'm considering retiring from "professional" dev because of shitty
people like you.
Thanks again for the good write-up, Yossi Kreinin.
"I'm such a great coder that I find stuff most people think is hard,
easy!"
That's all I got from this. If you're going to do something based on
its level of difficulty I would consider it more admirable to have
chosen something you deem difficult.
But why choose on that basis? I used to do pretty low level RTOS
stuff and I hated it. Everything had to be just so. That's not hard,
it's annoying.
I prefer higher levels now because my time is too valuable to be
chasing down errant bits.
YMMV.
LOL the comments are hilarious! i did some coding in Assembly during
the Precambrian era and it's easy as fuck! totally agree with Yosef.
yosef, i must say, your blog is one the best tech blogs out there.
you make excellent points. i remember back in the day when i just came
fresh out of school, i thought, client-side GUI development (in C++
(hey, i was young and inexperienced)) would be more interesting than
server. it’s all about the interface, man. ;)
well, that was just a few years after HTML was out and Javascript
just lurched on the scene. i took one look at the badly mutated beast
and ran for the hills!
that’s when i realized i will never be a web developer unless things
improve, and i mean, improve radically. and what’s it been? almost 20
years later and it’s barely starting to look better. but the mutant
doesn’t give up that easily. it’s like a parasitic worm clinging to
everything it touches, so much so it has now infected the server space
with that foul sticky spaghetti blob called node.js. can you believe
this shit? server-side coding in Javascript?! WTF was ryan dahl
thinking?! will the insanity never stop? (that is the problem in this
industry: too many juveniles with no tech wisdom, ie enough experience
to know better, running around getting excited by the most worthless
shit they come across if it feeds their ego's needs for validation. it's
so bad that Google even had to hold seminars to cool down their
psychologically dysfunctional 'geniuses' and remind them it's not about
showing off who is the smartest peon in the whole cubicle barn. it is
ultimately a psychological problem that undermines the whole field of
IT. if not, we would by now have already had better OSs, dev
environments, not to mention languages. for god's sake, lest we forget,
LISP came out in the 50s! and where are we today? still dicking around
with imperative/procedural languages (with a few exceptions, sure)) (but
still... it is annoying beyond belief.) ok end of rant.)
back then i was lucky enough that Java came out when i was looking
for my first job. needless to say, Java was more alluring than C++ (hey,
i was still young and inexperienced!) and the server side became my new
horse (i trust the reasons are obvious ;)).
but again, the mutation curse reared its ugly head and Java became
that bloated beast we all love to hate, at least those of us in the
resistance.
one can only hope that Dart (and Polymer) or Clojurescript will
euthanize the parasite, and that Parallel Haskell, Clojure, Flow, and
Rust will eradicate the monstrosities of yesteryear (Java, C++) or some
other magnificent FP language.
until then, i walk the wilderness with a few lone gunmen still hoping
for the promised land.
Most of my experience lies in front- and back-end Web development (as
a hobbyist), though I haven't done much in the past year or so. I will
say that Web development does a great job at putting security in the
forefront.
For me, my desire to learn and write code was driven by my curiosity
about how computers work, which generally kept leading me toward
lower-level concepts.
Your post here (I realize was made a long time ago) does resonate
with me. I will say the increased distance between learning stuff and
getting programs running can be a jarring contrast when moving from
higher-level environments to lower-level. There can be several things
you need to learn, in addition to, say, C.
You're not far off when you say every idiot *did* pick up CSS and
PHP, but I'd say it's more accurate they picked up PHP snippets and
managed to learn just enough HTML to prevent their browsers of choice
from imploding.
My first experience attempting to travel lower-level was with C. I
tried to cobble together information on the first C tutorial I could
find and some sparse information on the Win32 API. At the time, I didn't
know C, the Win32 API, or even how to use the toolchain! Needless to
say, I felt very overwhelmed. I didn't know how to separate the language
from the environment at the time.
When I finally learned these things over time, however, it was a
tremendous feeling of accomplishment.
I always thought lower level was more difficult... It makes no sense
how web development is harder than low-level development; a lot of the work
is already done for you in higher-level things. There's a reason we have
more web developers than operating systems engineers...
One reason is that you need more web apps than operating systems. As
to a lot of work being already done for you – sure, but the flip side is
that you're expected to get much more work done on top of it all much
more quickly.
Another thing – a bad web dev will ship something that sorta works, a
bad OS developer might not ship anything remotely close to working. In
that sense – low-level is harder. All I'm saying is that a good
low-level developer has it easier than a good high-level
developer, who needs to build much more upon a much quirkier
foundation.
you all dont know what ur talking about so shut up
Very interesting article. While I don't necessarily think that lower
level programming is – in practice – easier than higher level
programming a lot of the time, I do agree with a lot of the points
you've made (and generally, that low level programming can be a /lot/
easier and less scary than people imagine). I'm hoping to try and convey
these ideas to newbies in my programming video courses at https://www.sourcecrunch.com/
TL;DR conclusion: „simple“/KISS software is actually much more
*complex*, and hence less elegant, and hence has more chances of having
bugs.
High level isn't the problem. Shitty, inelegant, un-emergent, tacked-on
mess programming is. (And the WhatTheFuckWG are the Elder Gods of
this.)
"The Right Answer (IEEE floating point)" – ??? "NaN == NaN" is false
for IEEE floating point. "<" is not a mathematical ordering
relation.