Going further, the researchers attempted to replicate the performance of humans and baboons with artificial intelligence, using neural network models inspired by basic mathematical ideas about what a neuron does and how neurons are connected. These models, statistical systems powered by high-dimensional vectors (matrices that multiply layers of numbers), successfully matched the performance of the baboons but not that of the humans; they could not reproduce the regularity effect. However, when the researchers created a souped-up model with symbolic elements, giving it a list of properties of geometric regularity such as right angles and parallel lines, it closely replicated human performance.
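To make the symbolic idea concrete, here is a minimal sketch (illustrative only, not the researchers' actual model) of how properties such as right angles and parallel sides might be computed from a quadrilateral's vertices and turned into explicit features; the function name, tolerance, and feature set are all assumptions for the example.

```python
import math

def regularity_features(quad, tol=1e-6):
    """Compute symbolic regularity features for a quadrilateral.

    quad: list of four (x, y) vertices in order.
    Returns a dict of explicit geometric properties (right angles,
    parallel sides, equal sides) -- a hypothetical feature list,
    not the one used in the study.
    """
    # Side vectors: from vertex i to vertex i+1, wrapping around.
    sides = [(quad[(i + 1) % 4][0] - quad[i][0],
              quad[(i + 1) % 4][1] - quad[i][1]) for i in range(4)]

    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    def cross(u, v):
        return u[0] * v[1] - u[1] * v[0]

    lengths = [math.hypot(*s) for s in sides]
    # Right angle: consecutive sides are perpendicular (dot product ~ 0).
    right_angles = sum(abs(dot(sides[i], sides[(i + 1) % 4])) < tol
                       for i in range(4))
    # Parallel sides: opposite side vectors have ~zero cross product.
    parallel_pairs = sum(abs(cross(sides[i], sides[i + 2])) < tol
                         for i in range(2))
    equal_sides = all(abs(l - lengths[0]) < tol for l in lengths)
    return {"right_angles": right_angles,
            "parallel_pairs": parallel_pairs,
            "equal_sides": equal_sides}

# A square scores maximally on every feature; an irregular shape does not.
print(regularity_features([(0, 0), (1, 0), (1, 1), (0, 1)]))
```

A purely statistical network would have to discover such regularities implicitly from examples; handing the model these properties directly is what "symbolic elements" means here.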
These results, in turn, pose a challenge for artificial intelligence. “I love the advances in AI,” said Dr. Dehaene. “It’s very impressive. But I think a deep aspect is missing, and that’s symbol processing”: the ability to manipulate symbols and abstract concepts as the human brain does. This is the subject of his latest book, “How We Learn: Why Brains Learn Better Than Any Machine…for Now.”
Yoshua Bengio, a computer scientist at the University of Montreal, agreed that current AI lacks something having to do with symbols or abstract thinking. Dr. Dehaene’s work, he said, presents “evidence that human brains utilize capabilities that we are yet to find in state-of-the-art machine learning.”
That’s especially true, he said, of the way we combine symbols, assembling and reassembling pieces of knowledge, which helps us generalize. This gap could explain the limitations of AI, such as a self-driving car’s inflexibility when confronted with environments or scenarios that differ from its training repertoire. And it’s a clue, Dr. Bengio said, to where AI research needs to go.
Dr. Bengio noted that from the 1950s to the 1980s, symbolic processing strategies dominated “good old-fashioned AI.” However, these approaches were motivated less by a desire to replicate the capabilities of the human brain than by logic-based reasoning (for example, checking the proof of a theorem). Then came statistical AI and the neural network revolution, which began in the 1990s and gained traction in the 2010s. Dr. Bengio pioneered this deep learning method, which was directly inspired by the human brain’s network of neurons.
His latest research proposes expanding the capabilities of neural networks by training them to generate or imagine symbols and other representations.
It’s not impossible for neural networks to think abstractly, he said; “we just don’t know how to do it yet.” Dr. Bengio, along with Dr. Dehaene and other neuroscientists, launched a major project to study how human conscious processing powers could inspire and empower next-generation AI. “We don’t know what will work and what will be, at the end of the day, our understanding of how brains do it,” said Dr. Bengio.