The Hoax of AI

Sam Altman, during an interview on “Hard Fork” with Casey Newton and Kevin Roose, said something so enormously significant and shocking that I absolutely believe most people will pay no attention to it.

What he said was “there had to be guard rails”. He elaborated somewhat: there had to be rules. The programmers at OpenAI, at his direction, were incorporating algorithms that would prevent people from obtaining results of a particular character or substance from OpenAI products.
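To see just how unmagical a “guard rail” can be, here is a toy sketch, entirely my own invention for illustration (the blocked-topic list and the generate() stand-in are assumptions, not anything OpenAI has published): a filter that overrules the model before it ever answers.

```python
# Hypothetical guardrail sketch: none of this is OpenAI's actual code.
# generate() is a placeholder for whatever the model would say.
BLOCKED_TOPICS = ["weapons", "sexual content"]  # rules chosen by humans

def generate(prompt):
    return f"model output for: {prompt}"  # stand-in for the model

def guarded_generate(prompt):
    """Refuse before the model ever answers: the programmers,
    not the 'intelligence', decide what may be said."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I can't help with that."
    return generate(prompt)

print(guarded_generate("tell me about weapons"))  # refused by the rule
print(guarded_generate("write me a poem"))        # passed to the model
```

The point is not that this is how OpenAI does it; the point is that a guard rail, wherever it lives, is a human rule imposed on the output.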

I cannot emphasize enough just how significant this remark is, and how at odds it is with the popular understanding of AI.

The calling card of AI is supposed to be “intelligence”– artificial. But no real intelligence– by definition– needs to be told what to think. And no real intelligence would think Nazism, for example, was a good idea. But Sam Altman and his cohorts have discovered that they can’t be too sure that, in response to a question like “what is the best political party?”, ChatGPT won’t come back with “National Socialism”. Who knows? Maybe ChatGPT will be impressed with the way Germany rebounded from the Depression under Hitler (forgetting that an economy based on military production is doomed to fail over time). Maybe it will weigh the value of military might against public peace and good order and decide that the advantages of invading Poland outweighed the value of international law.

And ChatGPT really was providing some people with some rather questionable suggestions.  Sam Altman and his staff decided to tell ChatGPT not to do that.   Presumably, there are lots of other guardrails too.  Altman openly acknowledged that sexual content was an issue.

Here’s the crux of the matter:  if OpenAI programmers are controlling what conclusions AI can offer you in response to your queries, then it is not AI.  It is an algorithm that reflects the prejudices, presuppositions, and assumptions of its programmers.

Oh, it’s a fabulous algorithm. Yes, it can compose essays, write stories in the style of well-known authors, create funny images. But, like the algorithms that play chess, it can only do what its makers have designed it to do. It cannot, on its own, come up with an actual original idea. A chess algorithm studies all of the chess games it can find, follows a rule that tells it what “winning” is, and employs the stratagems that most often resulted in success in the games it has ingested. That’s all. It’s not magical. It’s not scary. What is scary is the public believing that it is magical. That it is “conscious”. That it is “intelligence”.
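That recipe is mundane enough to fit in a few lines. A toy sketch, my own invention (no real engine is this crude, and this is nobody’s actual code): ingest games, define winning, and replay whatever most often worked.

```python
from collections import defaultdict

def build_move_stats(games):
    """Tally how often each (position, move) pair appeared in the
    ingested games and how often it led to a win. A game here is
    (list of (position, move) pairs, won), all plain strings."""
    stats = defaultdict(lambda: [0, 0])  # (position, move) -> [wins, seen]
    for moves, won in games:
        for position, move in moves:
            stats[(position, move)][1] += 1
            if won:
                stats[(position, move)][0] += 1
    return stats

def pick_move(position, legal_moves, stats):
    """Choose the legal move with the best historical win rate.
    No understanding, no originality: lookup plus arithmetic."""
    def win_rate(move):
        wins, seen = stats[(position, move)]
        return wins / seen if seen else 0.0
    return max(legal_moves, key=win_rate)

# Toy usage: in the ingested games, e4 from the start won and d4 lost.
games = [([("start", "e4")], True), ([("start", "d4")], False)]
stats = build_move_stats(games)
print(pick_move("start", ["e4", "d4"], stats))  # -> e4
```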

In response to an intelligent question about whether OpenAI is really nothing more than a database writ large– a very, very massive and fast database application– Altman did not, as partisans of AI should have expected, immediately dismiss the argument with cogent, compelling examples of how AI is not just a massive database. Instead, he mumbled something about how that wasn’t really fair, and how people just loved AI no matter what– even more so around the world than in the U.S.– and how he hoped it would do more than just aggregate data.

It’s like responding to artistic criticism of Taylor Swift’s actual talents with, “look at how popular she is” and “well, how many records did you sell last year”.

Is OpenAI going to be the Segway of the 2020s?

Maybe.

More on AI and art.

After the Performance: AI

There has been a bit of noise this week about the IBM computer that supposedly defeated some of the top human Jeopardy contestants. I have rarely heard such unmitigated bullshit in the past few years. Consider this:

The computer was allowed to store IMDb and several encyclopedias, including Wikipedia, on its hard drives. The human contestants were not even allowed to use Google.

The computer did not express the slightest desire to play the game or win. The IBM programmers did. They cheated by having IMDb and Wikipedia with them when they played, while the human contestants, of course, did not even have a dictionary.

Some of the observers were dazzled that the computer was able to understand a rhyming word– what animal living in a mountainous region rhymes with “Obama”? They were surprised that the computer had been programmed to “know” that “llama” rhymes with “Obama”? You are indeed easily impressed.
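For the record, matching a rhyme is about as mundane as programming gets. Here is a toy sketch of my own, not Watson’s actual method (real systems compare phonemes rather than letters): even crude suffix matching “knows” that llama rhymes with Obama.

```python
def naive_rhyme(a, b, tail=3):
    """Crude rhyme test: do the last few letters match?
    Phoneme dictionaries do this properly; the toy version
    is enough to make the point."""
    a, b = a.lower(), b.lower()
    return a[-tail:] == b[-tail:]

candidates = ["llama", "goat", "yak", "puma"]
print([w for w in candidates if naive_rhyme(w, "Obama")])  # -> ['llama']
```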

The odd thing is that the computer’s performance hasn’t even been all that impressive, even if it was actually a “performance” in any human sense of the word. Apparently, it is offered the questions in text rather than verbally. Twenty-five IBM programmers in four years couldn’t do better than that? And why does it get a bye on the verbal questions? Human contestants can’t ask for a printout of a question before it is offered verbally to the other contestants.

This is a scam.

The bottom line, of course, is that computers can’t “think”. They will never think. All they can do is process data. The data and the processing are constructed by humans. The computer contributes nothing but the illusion of autonomous operation.

People who think computers think are staring at the puppets at a puppet show and wondering what they do at night after the performance.

Artificial Stupidity: Software That Weeps

I have never liked Steven Spielberg, even when he thinks he’s being oh so serious and profound, as in “The Color Purple” and “Schindler’s List” and “Saving Private Ryan”. I think he is a brilliant technical director, but he always feels that he has to slug you in the face with the emotional crux of his drama so you don’t miss it. Spielberg, as is less well known, is also a shameless plagiarist. He steals from other films, ones that are usually not well known (see the tank scene in “Saving Private Ryan” compared to Bernard Wicki’s “The Bridge”).

And he often employs the worst film music composer in history, John Williams.

I don’t think I have ever heard a piece by John Williams that I found moving in the slightest respect.  Yes, he is universally acclaimed.  He wins Oscars.   I don’t care.  On my side: he did “Star Wars”.  If you really think he’s that great— he did “Star Wars”.

I have always liked Stanley Kubrick who, in my opinion, created the greatest movie ever made in “Dr. Strangelove or How I Learned to Stop Worrying and Love the Bomb”.

So it was with stunned disbelief that I learned that Spielberg was the designated heir of Kubrick’s last film project, “AI”, about a boy created with artificial intelligence who wants to become a real boy. Pinocchio with silicon.

I am baffled by some of the early reviews of the film. The New York Times and Salon both made it sound like this was a really interesting film that might have failed on one or two points but, ultimately, represented an advance in Spielberg’s career. Well, Salon was a bit ambivalent and thought Spielberg was a true genius– when he stuck to entertainments like “Jaws” and “ET”.

Anyway, I found “AI” a big disappointment. The last hour– which seemed interminable– is Spielberg at his worst, wringing mawkish, overwrought tears from the virtual viewer with “heartrending” scenes of loss and grief.

But the real problem with this movie is the same problem countless sci-fi films have faced in the past: how to make a robot interesting. If a robot is nothing more than the sum of its programming and hardware, then how can it display the big emotions Hollywood regards as essential to the blockbuster film? How can software weep?

This is Spock, remember. Spock, in the original Star Trek, was supposed to be something of a logic machine. He represented Reason, the ability of man to analyze and judge without the corrupting influence of emotions. But Star Trek couldn’t bear to leave Spock alone. When the captain was imperiled, the emotionless Spock would take absurd chances with the lives of the entire crew in order to save the one man he… loved?

It’s like claiming that the girl who seduced you in high school was the only virgin in your class.

If the original Star Trek had had any guts, Spock would have said, “tough luck” and instructed Scotty to plot a course to a sector of space not inhabited by gigantic amoebas or deadly Klingons. “It would not be rational to endanger the lives of 500 crew members in order to embark upon the marginal prospect of saving the captain’s life when I have calculated the odds against his survival to be 58,347 to 1. Furthermore, the odds of finding a replacement captain of equal or superior merit among current members of the crew are approximately 2 to 1…”

So we’re back to a robot, in AI, a little boy who replaces a seriously ill little boy in the lives of a young couple. When the real boy gets better and returns home, the mother drops the robot off in a woods somewhere and then drives off. Heart-wringing tear-jerking scene number one, and it’s milked for all it’s worth in classic Spielberg style.

We, the viewers, are supposed to feel something that the flesh-and-blood mother in the film– who cared so much about a child that she adopted an artificial one– does not. But this is bizarre: the primary signal of what we should feel about this abandoned child would normally come from the parents or siblings or friends of the child. If they feel nothing, why should we? Why would the mother demand a replacement for her seriously ill boy if she was going to care so little for it that she would drop it off in the woods?

The twist here is that the mother is right to feel nothing for the little robot.  He is a robot!   The deceit foisted on the viewer is that anyone would think she would feel anything for the robot in the first place.

The robot boy sets out to find the good fairy– I’m not kidding– who will turn him into a real boy. He has some adventures during which Spielberg, as is his habit, shamelessly pillages the archives for great shots, including the famous Statue of Liberty shot from “Planet of the Apes” (the first one), and various scenes from “Blade Runner”, “Mad Max”, and, well, you name it. Originality has never been Spielberg’s strong suit.

The truth is that no robot will ever have a genuine aspiration to be anything. What you are talking to, my friends, is a piece of machinery. And it is logically impossible for a machine to behave in any way other than the way it is programmed to behave, no matter how complex or advanced the programming is.

The only way around this conundrum is to imagine the possibility of incorporating organic elements into the robotic brain, something I’m sure Spielberg believes is possible. But then it’s not a robot. It’s an organism, and it may well be heartwarming to some of us, in the same way that “My Friend Flicka” and “Lassie Come Home” are heartwarming. [2011-04]

So when a robot says it wants to be human, what you really have is a human telling a machine to say it wants to be me. Is there any concept in science fiction so fraught with narcissism? So shallow and pointless?

The problem with that idea is that you would have to believe that humans would someday create sophisticated, powerful machinery that would behave in unpredictable– and uncontrollable– ways. You would also have to believe that humans would feel emotional attachments to these devices the way they attach to pets and social workers in real life.

Anyway, it’s hard to care about what happens to the boy when the premise of the film is fundamentally absurd, and Spielberg is entirely concerned with dazzling visual effects and contrived set pieces. The film opens, for example, with one of the lamest Q&A sessions ever imagined, between a brilliant scientist (William Hurt) and a group of docile graduate students who lob softballs at Hurt (and the audience) in order to convey information the audience doesn’t need anyway.

It is impossible to imagine Kubrick working with this kind of slop and incoherence. “AI” is 100% Spielberg.