Microsoft Will Fluff Your Pillows

Microsoft is offering its own version of AI. Probably the first sure sign that AI is never going to be anything really useful.

From Microsoft’s description, it sounds like Google on amphetamines. It will probably take more work to keep it from going off in the wrong direction than it would take to just do the task yourself. And to get it to work correctly, you probably have to provide it with the same level of detail you’d have to provide if you were doing it all yourself. Then you’ll have to double-check the results to make sure it hasn’t gone off and done something really crazy.

There will be a lot of “guardrails”, which tells you that it really isn’t “AI”; it’s just algorithms on speed. It will solve problems that are easily solved without AI and fail the minute it is confronted with a real one.

There is an SCTV sketch in which Dr. Tongue turns a bunch of stewardesses into his zombie slaves. It turns out the zombies behave exactly like stewardesses: Can I get you another coffee? Fluff up your pillow? Would you like a snack tray? Is your seat comfortable?

“Is there turbulence ahead?” Let me fluff up your pillow again.

Eventually Dr. Tongue wishes they would go away.

Why the Suit?

This week’s notes:

There it is again– a man wearing dark pants and a matching jacket and a light shirt with a dark cloth tied tightly around his neck. A man in a suit. What is a suit? Why do powerful men all around the world wear one? Why, for heaven’s sake, do Japanese businessmen wear suits? Why did Malcolm X?

There has been a bit of noise this week about the IBM computer that supposedly defeated some of the top human Jeopardy contestants. I have rarely heard such unmitigated bullshit.

Consider this:

The computer was allowed to store the IMDB and several encyclopedias, including Wikipedia, on its hard drives. The human contestants were not even allowed to use Google.

The computer did not express the slightest desire to play the game or win. The IBM programmers did. They cheated by having the IMDB and Wikipedia with them when they played, while the human contestants, of course, did not even have a dictionary.

Some of the observers were dazzled that the computer was able to understand a rhyming word: what animal living in a mountainous region rhymes with “Obama”? They were surprised that the computer had been programmed to “know” that “llama” rhymes with “Obama”? You are indeed easily impressed.
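
Just how little “knowing” that requires is easy to show. Here is a toy sketch in Python– the word list and the suffix trick are my own stand-ins, not anything IBM has published about Watson’s internals:

    # A crude rhymer: two words "rhyme" if their last few letters match.
    # The word list is a stand-in for the kind of table a machine could
    # trivially be handed in advance.
    ANIMALS = ["llama", "ibex", "yak", "chamois", "marmot"]

    def crude_rhyme(a, b, tail=3):
        return a.lower()[-tail:] == b.lower()[-tail:]

    print([w for w in ANIMALS if crude_rhyme(w, "Obama")])  # ['llama']

A dozen lines, and no intelligence anywhere in sight.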

Robot Love

More on Robot Love

Am I right? Consider this: would you enjoy watching a TV show in which contestants competed to solve complicated math equations as quickly as possible? Now, would you be excited to see a computer compete against the humans in this contest? I didn’t think so.

Yes, computers can crunch numbers. In fact, in essence, that’s all they do. The natural language used for the questions in Jeopardy is broken down by the computer into bits and bytes and then processed. Very quickly.

From the computer’s point of view, all of the questions are nothing more than math equations to be solved with speed.
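
If you doubt it, here is the point in miniature– a sketch that assumes nothing about Watson’s internals, showing what a clue looks like to the machine before anything else happens to it:

    # To the machine, a Jeopardy clue is numbers before it is anything else.
    clue = "This mountain animal rhymes with Obama"
    print(list(clue.encode("utf-8"))[:10])
    # [84, 104, 105, 115, 32, 109, 111, 117, 110, 116]

Everything downstream of that is arithmetic on those numbers.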

It’s a Binary World

Now this one really bugs me: “KG Blankinship” writes in a letter to the New York Times that “of course we can build machines that exhibit purely random behavior by exploiting quantum mechanics as well”.

But before that he says something even more absurd: “Self-awareness and the ability to adapt creatively can also be programmed into a computer”. The statement is self-contradictory, but he hits on a truth: “can be programmed” into a computer. Next he’ll tell us that a computer can program itself– as if the program that told it to program itself could ever be anything that was not, no matter how many steps down the chain, the product of human intervention.

Can a computer’s behavior ever be truly “random”? Or is the appearance of randomness merely the irreducible fact that humans have hidden the schedule for the behavior from other humans by employing elaborate and abstruse mathematical formulae? Yes, always. And it’s always ultimately math. And the computer is always ultimately binary, which means it can never not be math. And if someone jumps up and shouts “yeah, but sooner or later they will find a way to integrate organic cells…” I say that on that day the organic cells will be self-aware or random, not the computer.
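
The point is easy to demonstrate. A pseudo-random generator given the same seed produces the same “random” sequence every single time– a minimal sketch, using Python’s standard library:

    import random

    # The fixed seed is the hidden schedule: reseed, and the "random"
    # numbers repeat exactly.
    for _ in range(2):
        rng = random.Random(42)
        print([rng.randint(0, 9) for _ in range(5)])
    # Both printed lines are identical, run after run after run.

The randomness is an appearance we grant the machine only because we haven’t seen the schedule.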

Why does it matter? Because sooner or later someone is going to tell someone else that something is true, or must be done, and can’t be contradicted, because a computer said it was true or must be done. No– the programmer said it was true or must be done. The computer is only doing the one thing it can do: parrot the input of its master.

It occurs to me that some of the people defending the idea that computers can “think” like humans operate under the assumption that the human brain is binary in function– that is, that neurons are all either on or off, with no meaningful in-between state. (I suppose you could also argue that a very, very large number of computer chips could attain a level of virtual analog operation, where there are so many simulated “in-between” states that it operates like a human brain.)

It’s an intriguing line of thought. I don’t believe the human brain is binary in that sense. I believe that human beings are an integrated system in which any particular state of virtually any part of the body has an infinite range of values, which, combined with every other part of the body having an infinite range of values, produces an organism that can never be matched by any device that is, by definition, at its fundamental level, always binary.
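
Even the machine’s “in-between” values give the game away. A floating-point number is a finite binary fraction, so a value as plain as 0.1 can only be approximated– a two-line sketch:

    # 0.1 has no exact binary representation, so the arithmetic leaks.
    print(0.1 + 0.2 == 0.3)   # False
    print(f"{0.1:.20f}")      # 0.10000000000000000555...

The “analog” states are binary all the way down.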

To believe that human brains are also binary is to impose a reductionist view of biology onto the organism. You can only believe it if you choose to see only the binary functions of the organism and ignore the organic, non-binary aspects of the brain.

Robots Can’t Love

I enjoyed “Wall-E”, because the graphics were nice, and the action was wittily contrived. Wall-E meets and falls in love with a more up-to-date robot that looks like an inflated iPod. The two coo at each other.

But why do people so readily want to believe that robots might some day be capable of having feelings?

This is an immutable and irrevocable fact about robots: robots are programs– there is not a single thing they will ever do that is not the result of a programming instruction placed there by a human technician. The “feelings” expressed by a robot will only ever be as real as the cuckoo in a cuckoo clock, or those dolls that used to have a string in the back, and will probably be twice as annoying after the very, very brief phase of novelty wears off.
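
If you want the whole trick laid bare, here is a sketch– the dictionary is mine, invented for illustration, but a robot “feeling” reduces to something of this shape, however many layers are piled on top:

    # Every "emotion" below was placed here, in advance, by a technician.
    RESPONSES = {
        "hello": "I am so happy to see you!",
        "goodbye": "I will miss you terribly.",
    }

    def express_feeling(event):
        return RESPONSES.get(event, "I love you.")

    print(express_feeling("goodbye"))  # I will miss you terribly.

The cuckoo clock, with more entries in the table.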

Well, there are movies about talking dogs and flying men and 12-year-olds who know Unix (Jurassic Park), so what’s the problem? The problem is, I get why we might have a compelling movie about a talking dog, or a smart 12-year-old, or a man with superpowers: all of them correspond to real beings who have real feelings, and there are explanations for the dog, the 12-year-old, and Superman. There is no explanation that can possibly account for why a robot would have human feelings, just as there is no possible explanation of why a bullet might fly at 10 miles per hour, or why there would be a parking spot available right in front of that downtown office building our hero needs to enter immediately.

The problem is, I just don’t find a story line about a robot with feelings compelling. It’s just not interesting. It’s impossible to care about the robot with feelings because I can’t escape the awareness that every action the robot takes in response to his “feelings” is, in fact, the result of a program created by his manufacturer.

Ironically, the most interesting idea about a robot with “feelings” is this: what if the humans in the story didn’t know it was a robot?

What about “Blade Runner”?

All right– this is an interesting movie. But the “replicants” are clearly not robots– they’re genetically engineered organisms. Or are they? The movie doesn’t explain. They bleed and they die and they have feelings. Does that answer the question? Yes, it does– they are organisms, genetically engineered to function like humans, so they can work and live where humans would find conditions intolerable.

But… in one scene, Deckard encounters a maker of the replicants’ eyes– which are clearly manufactured, aren’t they? Then again, they could just as easily have been cultured in some way, grown from stem cells, or what have you.

The most beautiful moment in the movie comes when a replicant does something absolutely human– gets nostalgic:

I’ve seen things you people wouldn’t believe… attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate.

…All those moments will be lost in time, like tears in rain. Time to die.

Other concepts that a Hollywood producer found credible:

  • perky brain surgeons who look and talk like Meg Ryan
  • parking spaces in front of the building you suddenly need to enter very quickly in order to save someone’s life
  • soldiers as heroes who never seem to actually kill anybody
  • mothers who have all the time in the world to send their children off to school with hugs and kisses and expressions of consuming devotion– as if they knew something bad was going to happen
  • annoying mentally disturbed men who seem strangely attractive to young, beautiful women
  • rogue police who “break all the rules”
  • suspects who immediately tell the truth when threatened by the rogue cop who breaks all the rules

Artificial Stupidity: Software That Weeps

I have never liked Steven Spielberg, even when he thinks he’s being oh so serious and profound, as in “The Color Purple” and “Schindler’s List” and “Saving Private Ryan”. I think he is a brilliant technical director, but he always feels that he has to slug you in the face with the emotional crux of his drama so you don’t miss it. Spielberg, as is less well known, is also a shameless plagiarist. He steals from other films, ones that are usually not well known (see the tank scene in “Saving Private Ryan” compared to Bernhard Wicki’s “The Bridge”).

And he often employs John Williams, the worst film music composer in history.

I don’t think I have ever heard a piece by John Williams that I found moving in the slightest. Yes, he is universally acclaimed. He wins Oscars. I don’t care. On my side: he did “Star Wars”. If you really think he’s that great– he did “Star Wars”.

I have always liked Stanley Kubrick, who, in my opinion, created the greatest movie ever made in “Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb”.

So it was with stunned disbelief that I learned that Spielberg was the designated heir of Kubrick’s last film project, “AI”, about a boy created with artificial intelligence who wants to become a real boy. Pinocchio with silicon.

I am baffled by some of the early reviews of the film. The New York Times and Salon both made it sound like this was a really interesting film that might have failed on one or two points but, ultimately, represented an advance in Spielberg’s career. Well, Salon was a bit ambivalent and thought Spielberg was a true genius– when he stuck to entertainments like “Jaws” and “E.T.”.

Anyway, I found “AI” a big disappointment. The last hour– which seemed interminable– is Spielberg at his worst, wringing mawkish, overwrought tears from the virtual viewer with “heartrending” scenes of loss and grief.

But the real problem with this movie is the same problem countless sci-fi films have faced in the past: how to make a robot interesting. If a robot is nothing more than the sum of its programming and hardware, then how can it display the big emotions Hollywood regards as essential to the blockbuster film? How can software weep?

This is Spock, remember. Spock, in the original Star Trek, was supposed to be something of a logic machine. He represented Reason, the ability of man to analyze and judge without the corrupting influence of emotions. But Star Trek couldn’t bear to leave Spock alone. When the captain was imperiled, the emotionless Spock would take absurd chances with the lives of the entire crew in order to save the one man he… loved?

It’s like claiming that the girl who seduced you in high school was the only virgin in your class.

If the original Star Trek had had any guts, Spock would have said, “tough luck” and instructed Scotty to plot a course to a sector of space not inhabited by gigantic amoebas or deadly Klingons. “It would not be rational to endanger the lives of 500 crew members in order to embark upon the marginal prospect of saving the captain’s life when I have calculated the odds against his survival to be 58,347 to 1. Furthermore, the odds of finding a replacement captain of equal or superior merit among current members of the crew are approximately 2 to 1…”

So we’re back to a robot, in AI: a little boy who replaces a seriously ill little boy in the lives of a young couple. When the real boy gets better and returns home, the mother drops the robot off in the woods somewhere and then drives off. Heart-wringing, tear-jerking scene number one, and it’s milked for all it’s worth in classic Spielberg style.

We, the viewers, are supposed to feel something that the flesh-and-blood mother in the film– who cared so much about a child that she adopted an artificial one– does not. But this is bizarre– the primary signal of what we should feel about this abandoned child would normally come from the parents or siblings or friends of the child. If they feel nothing, why should we? Why would the mother demand a replacement for her seriously ill boy if she was going to care so little for it that she would drop it off in the woods?

The twist here is that the mother is right to feel nothing for the little robot. He is a robot! The deceit foisted on the viewer is that anyone would think she would feel anything for the robot in the first place.

The robot boy sets out to find the good fairy– I’m not kidding– who will turn him into a real boy. He has some adventures during which Spielberg, as is his habit, shamelessly pillages the archives for great shots, including the famous Statue of Liberty shot from “Planet of the Apes” (the first one), and various scenes from “Blade Runner”, “Mad Max”, and, well, you name it. Originality has never been Spielberg’s strong suit.

The truth is that no robot will ever have a genuine aspiration to be anything. What you are talking to, my friends, is a piece of machinery. And it is logically impossible for a machine to behave in any way other than the way it was programmed to behave, no matter how complex or advanced the programming is.

The only way around this conundrum is to imagine the possibility of incorporating organic elements into the robotic brain, something I’m sure Spielberg believes is possible. But then it’s not a robot. It’s an organism, and it may well be heartwarming to some of us, in the same way that “My Friend Flicka” and “Lassie Come Home” are heartwarming. [2011-04]

So when a robot says it wants to be human, what you really have is a human telling a machine to say it wants to be me. Is there any concept in science fiction so fraught with narcissism? So shallow and pointless?

The problem with that idea is that you would have to believe that humans would someday create sophisticated, powerful machinery that would behave in unpredictable– and uncontrollable– ways. You would also have to believe that humans would feel emotional attachments to these devices the way they attach to pets and social workers in real life.

Anyway, it’s hard to care about what happens to the boy when the premise of the film is fundamentally absurd, and Spielberg is entirely concerned with dazzling visual effects and contrived set pieces. The film opens, for example, with one of the lamest Q&A sessions ever imagined, between a brilliant scientist (William Hurt) and a group of docile graduate students who lob softballs at Hurt (and the audience) in order to convey information that isn’t required by the audience anyway.

It is impossible to imagine Kubrick working with this kind of slop and incoherence. “AI” is 100% Spielberg.