Animals have a uniform, closed architecture. The human brain is an open platform; people get by using a wide variety of techniques called "professions". This flexibility has its drawbacks. We aren't tuned for any particular profession, and apparently that's why everybody develops some sort of profession-related problems. Proctologists reportedly turn rude as time goes by. Rock stars live fast but die young. Hunters in the African deserts get to chase antelopes for a couple of days. Antelopes run much faster, so you would never try to chase one; but the hunter knows better – compared to him, the animal lacks stamina, and will get tired, and then he can kill it. But sometimes the hunter has the misfortune of chasing a particularly strong antelope, in which case he still won't be able to get close enough by the end of the second day. Having wasted all that energy, he now certainly has to refuel, so he settles for a nearby half-rotten corpse. The effect of that sort of meal on his digestive system is one problem that comes with his profession.
Programmers develop their own problems. Today, we'll talk about AI problems some of us are having. As you probably already know, but my trademark thoroughness still obliges me to say, AI stands for "Artificial Intelligence" and comes in two flavors, "deterministic" (like minmax; there's a toy sketch of that one right after this paragraph) and "statistical" (like SVM). The combined efforts of various researchers led to an important breakthrough in this field, known to meteorologists as "the AI winter". This is the season when you can't get any VC money if you mention AI anywhere in your business plan. During this season, an alternate term was invented for AI, "Machine Learning". I think that the money/no money distinction between "ML" and "AI" isn't the only one, and that in other contexts, AI=deterministic and ML=statistical, but I don't care. In real systems, you do both. Lots of things labeled as "AI" work and are useful in practical contexts. Others are crap. It's always like that, but this isn't what I came to talk about today. By "AI problems", I didn't mean the problems that people face which require the application of methods associated with the term "AI". I meant "problems" in the psychiatric sense.
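Since minmax got name-dropped as the poster child of the deterministic flavor, here's a minimal sketch of what it boils down to: exhaustively searching a game tree for the move that's best against an opponent who also plays their best. The tiny tree and the scores are made up purely for illustration; real engines add alpha-beta pruning, evaluation heuristics and so on.

```python
# Toy game tree: inner lists are positions where it's someone's turn,
# plain numbers are terminal scores from the maximizing player's point of view.
# (Hypothetical example, just to show the shape of the algorithm.)

def minmax(node, maximizing):
    """Return the best score reachable from this node with optimal play."""
    if isinstance(node, (int, float)):   # leaf: nothing left to decide
        return node
    scores = [minmax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

tree = [[3, 5], [2, 9]]
print(minmax(tree, True))  # 3: the maximizer picks the branch where the minimizer hurts least
```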
A certain kind of reader will wonder whether I have the necessary qualifications to deal with a psychiatric issue so advanced. My credentials are humble, but I do work on hairy computer vision applications. The general problem computer vision deals with (identify, classify and track "objects" in real-world scenes) is considered "AI complete" by some, and I tend to agree. I don't actually work on the AI bits – the algorithms are born a level and a half above me; I'm working on the hardware & software that's supposed to run them fast. Still, I did get to see fairly successful AI being built, with different people approaching it differently. Some readers of the credential-sensitive kind will conclude that I still have no right to tackle the deep philosophical bullshit underlying Artificial Intelligence, and others will decide otherwise. Anyway, off we go.
AI problems make up a vast area; we'll only talk about a few major ones. First of all, we'll deal with my favorite issue, which is of course The Psychophysical Problem. There are folks out there who actually think they believe that their mind is software, and that consciousness can be defined as a certain structural property of information processing machines. They don't really believe it, as yosefk's ground-breaking Mind Expansion Experiment can easily demonstrate. I'll introduce that simple yet powerful experiment in a moment, but first, I want to pay tribute to the best movie of the previous century, which, among other notable achievements, provided the most comprehensive treatment of the psychophysical problem in popular culture. That motion picture is of course The Terminator, part I and, to an extent, part II. World-class drama. Remarkable acting (especially in part I – there are a couple of facial expressions conveying aggressive, hopeless, cowardly and impatient stupidity previously unheard of). Loads of fun.
Back to our topic, the movie features a digital computer with an impressive set of peripheral devices, capable of passing the Turing test. The system is based on Atari hardware, as this guy has figured out from the assembly listings cleverly edited into the sequences depicting the black-and-red "perspective" of the machine. According to the mind-is-software AI weenies, the device from the movie has Real Consciousness. The fascinating question of whether this is in fact the case is extensively discussed in the witty dialogs throughout the film. "I sense injuries", says the Atari-powered gadget. "This information could be called pain". Pain. The key to our elusive subject. I'm telling you, these people know their stuff.
The mind-is-software approach is based on two assumptions: the Church-Turing thesis and the feelings-are-information axiom. In my trademark orderly fashion, I'll treat the first assumption second and the second assumption first. To show the invalidity of the feelings-are-information assumption, we'll use yosefk's Mind Expansion Experiment. It has two versions: the right-handed and the left-handed, and it goes like this. If you're right-handed, put a needle in your right hand and start pushing it into your left arm. If you're left-handed, put a needle in your left hand and start pushing it into your right arm. While you're engaged in this entertaining activity, consider the question: "Is this information? How many bits would it take to represent?" Most people will reach enlightenment long before they'll cause themselves irreversible damage. Critics have pointed out that the method can cause die-hard AI weenies to actually injure themselves; whether this is a bug or a feature is still the subject of hot debate in the scientific community. Anyway, we do process something that isn't exactly information, because it fucking hurts; I hope we're done with this issue now.
Some people don't believe the first of the two above-mentioned assumptions, namely, the Church-Turing thesis. Most of these people aren't programmers; they simply lack the experience needed to equate "thinking" and "doing". But once you actually try to implement decision-making as opposed to making the decision yourself, your perspective changes. You usually come to think that in order to decide, you need to move stuff around according to some procedure, which isn't very different from the method of people doing manual labor at low-tech construction sites. Thinking is working; that's why "computational power" is called "power". I've only heard one programmer go "...but maybe there's a different way of thinking from the one based on logic". I couldn't think of any, except maybe the way based on psychoactive chemicals. "A different way of thinking". To me, it's like arguing that you can eat without food or kick ass without an ass, and I bet you feel the same way, so let's not waste time on that.
Next problem: some people actually think that a machine will pass the Turing test sooner or later. I wouldn't count on that one. Physicists claim that a bullet can fly out of one's body with the wound closing and healing in the process, because observations indicate that you can get shot and wounded, and if a process is physically possible, that same process reversed in time is also physically possible. It's just that the probability of the reverse process is low. Very low. Not messing with the kind of people who can shoot you is a safer bet than counting on this reversibility business. Similarly, the Church-Turing thesis claims that if a person can do it, a universal computing device can emulate it. It's just the feasibility of this simulation that's the problem. One good way to go about it would be to simulate a human brain in a chip hooked to enough peripherals to walk and talk and then let it develop in the normal human environment (breastfeeding, playing with other kids, love & marriage, that kind of thing). The brain simulation should of course be precise enough, and the other kids should be good kids and not behave as dirty racists when our Turing machine drives into their sand pit. If the experiment is conducted in this clean and unbiased way, we have a good chance of having our pet machine pass the Turing test by the time the other kids are struggling with their IQ tests and other human-oriented benchmarks.
Seriously, human language is so damn human that it hardly means anything to you if you are a Turing-complete alien. To truly understand even the simplest concepts, such as "eat shit" or "fuck off and die", you need to have first-hand experience of operating a human body with all of its elaborate hardware. This doesn't invalidate the Church-Turing thesis in the slightest, but it does mean that automatic translation between languages will always look like automatic translation. Because a "human" who interprets the original that way clearly lives inside a box with flashing lights, a reset button and a ventilator. For similar reasons, a translation by a poorly educated person will always look like a translation by a poorly educated person. I know all about it, because in Israel, there's a million ex-Russians, so they hire people to put Russian subtitles into movies on some channels. Unfortunately, they don't seem to have any prerequisites for the job, which means that I get to read a lot of Russian translations by morons. Loads of fun. These people equipped with their natural intelligence barely pass the Turing test, if you ask me, so I keep my hopes low on Turing-test-passing AI.
Moving on to our next problem, we meet the people who think that we actually need AI. We don't. Not if it means "a system that is supposed to scale so that it could pass the Turing test". And this is the only thing AI means as far as I'm concerned here. We already have "artificial intelligence" that isn't at all like our natural intelligence, but still beats our best representatives in chess, finds web pages, navigates by GPS and maps and so on. Computers already work. So the only thing we don't have is artificial intelligence that simulates our own. And this is as tremendously useless as it is infeasible. Natural intelligence as we know it is a property of a person. Who needs an artificial person? If you want to have a relationship, there's 6G of featherless two-legged Turing machines to pick from. If you want a kid to raise, you can make one in a fairly reliable and well-known way. We don't build machines in order to raise them and love them; we build them to get work done.
If the thing is even remotely close to "intelligent", you can no longer issue commands; you must explain yourself and ask for something and then it will misunderstand you. Normal for a person, pretty shitty for a machine. Humans have the sacred right to make mistakes. Machines should be working as designed. And animals are free to mark their territory using their old-fashioned defecation-oriented methodology. That's the way I want my world to look. Maybe you think that we'll be able to give precise commands to intelligent machines. Your typical AI weenie will disagree; I'll mention just one high-profile AI weenie, Douglas Hofstadter of Gödel, Escher, Bach. Real-life attempts at "smart" systems also indicate that once intelligence enters the picture, commands stop being commands. The reported atrocities of DWIM ("Do What I Mean") rival those of a command as precise as "rm .* -rf", which is supposed to remove the dot files in the current directory, but really removes more than that.
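About that "rm .* -rf" footgun: the shell expands ".*" before rm ever sees it, and classically that glob also matches the special entries "." and "..", so rm gets handed the current and parent directories too (some newer shells special-case this, but counting on that is its own kind of optimism). Here's a minimal sketch of the expansion, with a made-up directory listing and Python's fnmatch standing in for the shell's globbing:

```python
# Minimal sketch: why "rm .* -rf" classically removes more than the dot files.
# The glob ".*" also matches the special entries "." and "..",
# so the current and parent directories end up on rm's command line.
# (Hypothetical directory listing; fnmatch is a stand-in for the shell's glob.)
import fnmatch

entries = [".", "..", ".bashrc", ".ssh", "notes.txt"]
expanded = [name for name in entries if fnmatch.fnmatch(name, ".*")]
print(expanded)  # ['.', '..', '.bashrc', '.ssh'] -- ".." drags the whole parent directory in
```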
Finally, many people think that AIish work is Scientific and Glamorous. They feel that working on AI will get them closer to The Essence of The Mind. I think that 40 years ago, parsing had that vibe. Regular, Context-Free, automatic parser generation, neat stuff, look, we actually know how language works! Yeah, right.
You can build a beautiful AI app, and take your experience with you to the next AI app, but you won't build a Mind that you can then run on the new problem and have it solved. If you succeed, you will have built a software system solving your particular problem. Software is always like that. A customers database front-end isn't a geographical database front-end. Similarly, face recognition software isn't vehicle detection software. Some people feel that mere mortal programmers are biting bits, some obscure boring bits on their way to obsolescence, while AI hackers are hacking the Universe itself. The truth is that AI work is specialized to the obscure constraints of each project to a greater extent than work in most other areas of programming. If you won't take my word for it, listen to David Chapman from the MIT AI Lab. "Unlike most other programmers, AI programmers rarely can borrow code from each other." By the way, he mentions my example, machine vision, as an exception, but most likely, he refers to lower-level code. And why can't we borrow code? "This is partly because AI programs rarely really work." The page is a great read; I recommend pointing and clicking.
As I've promised, this wasn't about AI; it was about AI-related bullshit. And as I've already mentioned, lots of working stuff is spelled with "AI" in it. I've even been thinking about reading an AI book lately to refresh some things and learn some new ones. And then lots of AI-related work is in Lisp. They have taste, you can't take that away.