The Problem With AI Is the Word “Intelligence”
As a Financial Times headline says, “AI in Finance Is Like ‘Moving from Typewriters to Word Processors’” (June 16, 2024). But, I think, not much further than that, despite all the excitement (see “Ray Kurzweil on How AI Will Transform the Physical World,” The Economist, June 17, 2024). At least, doubts are warranted regarding the “generative” form of AI. (IBM defines generative AI as referring to “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.”)
The conversational and grammatical capacities of an AI bot like ChatGPT are impressive. This bot writes better, and appears to be a better conversationalist, than what must be a significant proportion of human beings. I am told that he (or she, except that the thing has no sex and I am anyway using the neutral “he”) efficiently performs tasks of identification and classification of objects and that he does simple coding. It is a very sophisticated program. But he crucially depends on his humongous database, in which he makes zillions of comparisons with brute electronic force. I have had occasion to verify that his analytical and artistic capacities are limited.
Sometimes, they are astonishingly limited. Very recently, I spent a couple of hours with the latest version of DALL-E (the artistic side of ChatGPT) trying to get him to correctly understand the following request:
Generate an image of a strong individual (a woman) who walks in the opposite direction of a crowd led by a king.
He just could not understand. I had to elaborate, reformulate, and re-explain many times, as in this modified instruction:
Generate an image of a strong and individualist individual (a woman) who walks in the opposite direction of a nondescript crowd led by a king. The woman is in the foreground and walks proudly from west to east. The crowd led by the king is in the close background and walks from east to west. They are going in opposite directions. The camera is south.
(By “close background,” I meant “near background.” Nobody is perfect.)
DALL-E was able to repeat my directives when I tested him, but he could not see the glaring errors of his visual representations, as if he did not understand. He produced many images where the woman on the one hand, and the king and his followers on the other hand, walked in the same direction. The first image below provides an intriguing example of this basic misunderstanding. When the bot finally drew an image where the woman and the king walked in opposite directions (reproduced as the second image below), the king’s followers had disappeared! A child learning to draw recognizes his errors better when they are explained to him.
I said of DALL-E “as if he could not understand,” and that is indeed the problem: the machine, actually a piece of code and a big database, simply does not understand. What he does is impressive compared with what computer programs could do until now, but it is not thinking or understanding, not intelligence as we know it. It is very advanced computation. But ChatGPT does not know that he is thinking, which means that he is not thinking and cannot understand. He just repeats patterns that he finds in his database. It looks like analogical thinking but without the thinking. Thinking implies analogies, but analogies don’t imply thinking. It is thus not surprising that DALL-E did not suspect the possible individualist interpretation of my instruction, which I did not spell out: a sovereign individual declined to follow the crowd loyal to the king. A computer program is not an individual and does not understand what it means to be one. As suggested by the featured image of this post (also drawn by DALL-E after much prodding, and reproduced below), AI cannot, and I suspect will never be able to, understand Descartes’s Cogito ergo sum (I think, therefore I am). And this is not because he cannot find Latin in his databases.
Nowhere in his database could DALL-E find a robot with a cactus on his head. The other Dalí, Salvador, could have easily imagined that.
Of course, nobody can forecast the future and how AI will develop. Prudence and humility are required. Advances in computation will likely produce what we would now consider miracles. But from what we know about thinking and understanding, we can safely infer that electronic devices, as useful as they are, will likely never be intelligent. What’s missing in “artificial intelligence” is the intelligence.