Robotics researcher and artificial intelligence expert Rodney Brooks argues that we have vastly overestimated OpenAI’s large language models, on which the successful chatbot ChatGPT is based.
In an interview with IEEE Spectrum, Brooks argues that these tools are a lot dumber than we realize, and that they’re a long way from being able to compete with humans at any given intellectual task.
In short, is AI poised to become the kind of artificial general intelligence (AGI) that could function at an intellectual level similar to that of humans?
Why ChatGPT couldn’t replace a human
“No, because it has no underlying model of the world,” Brooks told the publication.
“It has nothing to do with the world. It’s just correlation between language.”
Brooks’ comments serve as a valuable reminder of the current limitations of AI technology, and of how easy it is to read too much into it, given that it was designed to sound like a human rather than to reason like one.
“We see a person doing something, and we know what else they can do, and we can make a judgment call quickly,” he told IEEE Spectrum.
“But our models of generalization from a performance to a competence do not apply to AI systems.”
In other words, current language models are not able to logically infer meaning, even though they sound as if they can, which can easily mislead users.
“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be,” Brooks said.
The researcher said he experimented with large language models to help him with “arcane coding,” but ran into serious problems.
“It gives an answer with complete confidence, and I sort of believe it,” Brooks told IEEE Spectrum.
“And half the time, it’s completely wrong. And I spend two or three hours using that hint, and then I say, ‘That didn’t work.’”
“Now, that’s not the same thing as intelligence,” he added. “It’s not the same as interacting. It’s just looking things up.”
Ultimately, Brooks thinks future iterations of the technology could go some interesting places, “but not AGI.” And given the risks involved in having an AI system stand in for human intelligence, that’s probably for the best.