AI can often process more information than humans, but that advantage does not extend to reasoning by analogy, which is considered the greatest strength of human intelligence. While humans can think up solutions to new problems based on relationships with familiar situations, this ability is virtually absent in AI. Claire Stevenson is researching intelligence and analogical reasoning in both AI and children, and how the two might learn from each other, the University of Amsterdam notes.
The key question behind the research by Claire Stevenson, assistant professor of Psychological Methods, is: ‘How do humans manage to become so smart?’ She analyses the development of intelligence and the creative process, specifically in children and AI. Stevenson’s research combines her knowledge of developmental psychology with her background in mathematical modelling and computer science. ‘I’m basically trying to test human intelligence in AI, and test AI intelligence in children.’
Analogical reasoning in children
Claire Stevenson started her academic career in the field of developmental psychology, where she researched children’s learning potential: ‘so not what they already know, but what they are capable of.’ She examined the development of analogical reasoning in children, i.e. their ability to find solutions to new problems based on relationships with familiar ones. ‘For example, children were asked to complete the sequence: thirst is to drinking as bleeding is to bandage, wound, cutting, water or food? To find the right answer, you need to apply the relationship between thirst and drinking to bleeding, instead of relying on familiar associations like wound or cutting.’
Can AI reason by analogy?
Later on in her career, Stevenson switched to the Psychological Methods programme group, where she became fascinated by the idea of applying mathematical models to measure creative processes. This tied in nicely with her Bachelor’s degree in Computer Science. ‘The focus of my research is now shifting to cognitive AI and the mimicking of human intelligence. I’m exploring algorithms and the extent to which they can solve analogies – in other words, that thirst is to drinking as bleeding is to bandage. My colleagues and I are trying to answer the question of how much intelligence there really is in Artificial Intelligence.’
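A widely used baseline for this kind of verbal analogy is the vector-offset method over word embeddings (the ‘king − man + woman ≈ queen’ trick). The sketch below is not Stevenson’s method; it is a minimal illustration, assuming pre-trained GloVe vectors fetched via gensim’s downloader, of how an algorithm can be scored on the article’s example item.

```python
# Minimal sketch: scoring the analogy "thirst : drinking :: bleeding : ?"
# with the classic vector-offset method over pre-trained word embeddings.
# Assumes gensim is installed and can download the small GloVe model.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small model, downloaded on first run

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# If the relation carries over, the answer should lie near
# drinking - thirst + bleeding in embedding space.
target = vectors["drinking"] - vectors["thirst"] + vectors["bleeding"]

candidates = ["bandage", "wound", "cutting", "water", "food"]
ranked = sorted(candidates, key=lambda w: cosine(vectors[w], target), reverse=True)
print(ranked)  # the model's best guess comes first
```

Whether such a model settles on ‘bandage’ or is pulled toward a surface associate like ‘wound’ is exactly the kind of question this line of research probes.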
AI tends to struggle with generalisation
To answer that question, we first need to divide intelligence into two forms, Stevenson explains:
What you know: acquired knowledge and learned procedures like arithmetic (crystallised intelligence)
How you reason: problem-solving skills applied to new situations (fluid intelligence)
‘AI machines and algorithms have an enormous storage capacity – much larger than a human memory – and can retrieve and process information at lightning speed. They can do some amazing things,’ Stevenson enthuses, ‘but this first form of intelligence is actually quite simple compared to the other, which AI is still struggling with.’ AI can only produce solutions through abstract reasoning after extensive training, and then only in the areas in which it has been trained. ‘Studies dating back to the 1980s established that intelligence is all about the ability to generalise, and concluded that AI wasn’t very good at this. Our research shows that these findings have stood the test of time,’ Stevenson concludes.
AI and Bongard problems
Bongard problems are a well-known example of the limitations of AI. Mikhail Bongard was a Russian computer scientist who, in the late 1960s, designed puzzles that require the solver to discover patterns. Each problem consists of two sets of figures, with each set sharing a common attribute. The challenge is to discover the attribute that distinguishes the two sets. ‘Scientists are trying to develop AI that can learn to solve these problems, but its limited reasoning ability seems to be an issue: humans are “winning” this particular battle for the time being,’ Stevenson explains.
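To make the structure of these puzzles concrete, here is a toy sketch with hypothetical, hand-coded attributes (not Bongard’s actual figures). It assumes the hard part, perceiving each panel as a set of attributes, has already been done; once that is given, finding an attribute that is constant within each set but differs between the sets is trivial, which is precisely why hand-coded approaches miss what makes the puzzles hard.

```python
# Toy illustration of a Bongard problem's structure: two sets of panels,
# each panel reduced to hand-coded attributes, and a search for an attribute
# that is uniform within each set but differs across the sets.
left = [
    {"shape": "triangle", "filled": True,  "sides": 3},
    {"shape": "triangle", "filled": False, "sides": 3},
]
right = [
    {"shape": "square", "filled": True,  "sides": 4},
    {"shape": "square", "filled": False, "sides": 4},
]

def separating_attributes(left, right):
    found = []
    for key in left[0]:
        left_vals = {panel[key] for panel in left}
        right_vals = {panel[key] for panel in right}
        # constant within each set, and different across the sets
        if len(left_vals) == 1 and len(right_vals) == 1 and left_vals != right_vals:
            found.append((key, left_vals.pop(), right_vals.pop()))
    return found

print(separating_attributes(left, right))
# -> [('shape', 'triangle', 'square'), ('sides', 3, 4)]
```

Humans, by contrast, invent the relevant attribute on the fly rather than picking it from a fixed list, and that is the generalisation ability the article describes AI as lacking.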
What happens when AI learns to generalise?
Stevenson’s research aims to establish a link between the learning potential of AI and that of children. To that end, she plans to compare how children and AI solve both analogical reasoning tasks and Bongard problems (e.g. in the online Oefenweb learning environment). She then hopes to apply this knowledge to the further development of both AI and learning environments for children. ‘Imagine what would happen if AI managed to master analogical reasoning and learned to think more flexibly and creatively. It could combine that ability with its superior general (factual) knowledge and processing power to identify relationships between highly diverse and seemingly unrelated subjects. For example, AI could identify parallels between the course of and recovery from a disease on the one hand and the fight against climate change on the other, and contribute unexpected knowledge to help us solve complex problems.’