Nov 20, 2023
LLMs differ from human cognition because they are not embodied
Posted by Dan Breeden in categories: computing, materials
Large language models (LLMs) are impressive technological creations but they cannot replace all scientific theories of cognition. A science of cognition must focus on humans as embodied, social animals who are embedded in material, cultural and technological contexts.
There is the technological question of whether computers can be intelligent, and there is the scientific question of how humans and other animals are intelligent. Answering either requires agreement about what the word ‘intelligence’ means. Here I will follow common usage, while avoiding a definition that rules in only adult humans, by assuming that to be intelligent is to have the ability to solve complex and cognitively demanding problems. On this understanding, the question of whether computers can be intelligent has already been answered. With apologies to Dreyfus and Lanier, it has been clear for years that the answer is an emphatic ‘yes’. The recent advances made by ChatGPT and other large language models (LLMs) are the cherry on top of decades of technological innovation.