Sam Altman, during an interview on “Hard Fork” with Casey Newton and Kevin Roose, said something so enormously significant and shocking that I absolutely believe most people will pay no attention to it.
What he said was “there had to be guard rails.” He elaborated somewhat: there had to be rules. The programmers at OpenAI, at his direction, were incorporating algorithms that would prevent people from obtaining results of a particular character or substance from OpenAI products.
I cannot emphasize enough just how significant this remark is, and how at odds it is with the popular understanding of AI.
The calling card of AI is supposed to be “intelligence,” artificial though it may be. But no real intelligence, by definition, needs to be told what to think. And no real intelligence would think Nazism, for example, was a good idea. But Sam Altman and his cohorts have discovered that they can’t be sure that, in response to a question like “what is the best political party?”, ChatGPT won’t come back with “National Socialism.” Who knows? Maybe ChatGPT will be impressed with the way Germany rebounded from the Depression under Hitler (forgetting that an economy based on military production is doomed to fail over time). Maybe it will weigh the value of military might against public peace and good order and decide that the advantages of invading Poland outweighed the value of international law.
And ChatGPT really was providing some people with some rather questionable suggestions. Sam Altman and his staff decided to tell ChatGPT not to do that. Presumably, there are lots of other guardrails too. Altman openly acknowledged that sexual content was an issue.
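To make “guardrail” concrete, here is a minimal, hypothetical sketch of what output filtering layered on top of a model could look like. Everything in it (the function names, the blocked-topic list, the crude keyword matching) is my own illustration, not OpenAI’s actual design, which is surely far more sophisticated:

```python
# A deliberately crude, hypothetical "guardrail": the model produces an
# answer, and a separate rule layer decides whether the user gets to see it.
# Nothing here is OpenAI's actual code; it only illustrates the concept.

BLOCKED_TOPICS = {"nazism", "explicit", "weapons"}  # invented keyword list

def flags(text: str) -> set[str]:
    """Stand-in for a real content classifier; here, naive keyword matching."""
    lowered = text.lower()
    return {topic for topic in BLOCKED_TOPICS if topic in lowered}

def guarded_reply(model_output: str) -> str:
    """Intercept the model's answer before the user sees it."""
    if flags(model_output):
        return "I can't help with that."  # canned refusal replaces the answer
    return model_output

print(guarded_reply("Here is a recipe for soup."))      # passes through
print(guarded_reply("Nazism was arguably effective."))  # intercepted
```

However simple, the shape is the point: whoever writes the rule layer decides what counts as an acceptable answer.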
Here’s the crux of the matter: if OpenAI programmers are controlling what conclusions AI can offer you in response to your queries, then it is not AI. It is an algorithm that reflects the prejudices, presuppositions, and assumptions of its programmers.
Oh, it’s a fabulous algorithm. Yes, it can compose essays, write stories in the style of well-known authors, create funny images. But, like the algorithms that play chess, it can only do what its makers have designed it to do. It cannot, on its own, come up with an actual original idea. A chess algorithm studies all of the chess games it can find, follows a rule that tells it what “winning” is, and employs the stratagems that most often resulted in success in the games it has ingested. That’s all. It’s not magical. It’s not scary. What is scary is the public believing that it is magical. That it is “conscious.” That it is “intelligence.”
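For the curious, here is a toy version of that picture: pick whichever move won most often in the games you have ingested. It is deliberately naive (real chess engines add deep search and tuned evaluation functions, which this sketch omits entirely), and every name in it is invented for illustration:

```python
# Toy illustration of "play the move that won most often in ingested games".
# This is the article's simplified picture, not how a modern engine works.

from collections import defaultdict

# (position, move) -> [wins, games]; positions are opaque strings here.
stats: dict[tuple[str, str], list[int]] = defaultdict(lambda: [0, 0])

def ingest_game(moves: list[tuple[str, str]], won: bool) -> None:
    """Record every (position, move) pair from one finished game."""
    for position, move in moves:
        record = stats[(position, move)]
        record[1] += 1          # one more game passed through this pair
        if won:
            record[0] += 1      # ...and that game ended in a win

def best_move(position: str, candidates: list[str]) -> str:
    """Choose the candidate move with the highest historical win rate."""
    def win_rate(move: str) -> float:
        wins, games = stats[(position, move)]
        return wins / games if games else 0.0
    return max(candidates, key=win_rate)

# Feed it two "games" from a starting position and ask what to play.
ingest_game([("start", "e4")], won=True)
ingest_game([("start", "d4")], won=False)
print(best_move("start", ["e4", "d4"]))  # -> "e4"
```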
In response to an intelligent question about whether OpenAI’s product is really nothing more than a database writ large, a very massive and very fast database application, Altman did not, as partisans of AI should have expected, immediately dismiss the argument with cogent, compelling examples of how AI is not just a massive database. Instead, he mumbled something about how that wasn’t really fair, how people just loved AI no matter what (even more so around the world than in the U.S.), and how he hoped it would do more than just aggregate data.
It’s like responding to artistic criticism of Taylor Swift’s actual talents with “Look at how popular she is” and “Well, how many records did you sell last year?”
Is OpenAI going to be the Segway of the 2020s?
Maybe.