
AI lacks common sense – why programs cannot think

This illustration shows how AI lacks common sense. It is an example of the distortions found in the system's background material: DALL·E here illustrates a "human" as a white man in his 30s with symmetrical features. Illustration: DALL·E

Can AI think? The short answer is no, at least not in the way humans think. AI has no motivations, opinions, or empathy. Even two-year-olds possess something that our artificial systems lack – the capacity to think in terms of cause and effect, according to Peter Gärdenfors, professor of Cognitive Science at Lund University.

Since ChatGPT was introduced to great fanfare in 2022, the debate around AI as a future threat has intensified. Researchers such as Nick Bostrom, Max Tegmark and Olle Häggström argue that it is possible, perhaps even likely, that future AI systems will start beating humans in most aspects of cognition – what is known as Artificial General Intelligence (AGI). While AGI would be able to manage all kinds of cognitive tasks, the “narrow” AI that exists today is focused on specific ones such as playing chess or analysing data.

Peter Gärdenfors has been interested in thinking, and in AI, for more than fifty years, and he finds the rapid development surprising. He points to several programs in the field of medicine that could lead to major breakthroughs.

Nevertheless, he argues, there is a long way to go until we reach AGI. Even the most advanced AI systems around today are very specialised and lack the breadth and flexibility of human intelligence. They are also entirely dependent on us: if we were to switch off an AI system, or refuse to cooperate with it, it would not be able to work independently.

“A two-year-old is capable of many things that an artificial system is not. Causal thinking, for example – understanding that this action will have that consequence. That is something that young children learn in preschool. What happens when you bite your friend, or if you say certain words. So, a two-year-old is better at causal thinking,” says Peter Gärdenfors.

Young children learn a lot in their early years by falling, building with blocks, and throwing things. This is known as “embodied cognition.” A two-year-old can understand why someone is waving or why someone is swatting away a fly, while AI only “sees” two different hand movements. AI has no capacity to interpret social signals or understand intentions. That is why sarcasm is particularly difficult for AI systems.

AI also lacks creativity, feelings and adaptability, even if programmes can simulate them. A two-year-old is quickly able to learn new things and adapt to new surroundings through play and experiences, while AI is limited to the data it was trained on.

“Even if AI systems have done well in tasks such as reviewing mammograms with precision, they lack judgment. That is why human supervision is required, so that they don’t go wrong in strange situations that they have not been trained for,” argues Peter Gärdenfors. This might, for example, be when reviewing unusual images that deviate from earlier patterns.

The systems become backward-looking because they work exclusively with material they have been trained on, something that limits their creativity and their capacity to deal with completely new situations. When highlighting the successes of an AI system or robot, it is easy to focus on the problems the system manages to solve, and perhaps be impressed by this, while forgetting all the things it cannot do, Peter Gärdenfors argues.

“Artificial systems manage routine situations much better than we do. But the systems have more difficulty in dealing with new problems. A pilot who finds themselves in a new situation is often able to deal with it based on judgment and prior experience.”

What do we mean by intelligence?

It might be misleading to use the concept of intelligence, regardless of whether you are talking about humans, animals, or systems. 

“The concept of intelligence is inane and limited. I prefer the concept of common sense. People think that you measure intelligence using an intelligence test. IQ tests are very narrow, and they don’t measure how people act in the world. They are also very dependent on education: if you are highly educated, you raise your IQ,” says Peter Gärdenfors.

Women performed better than men on the first IQ tests. The mathematical element was therefore increased and the language part reduced, since women tend to be stronger in language, while men are generally stronger in maths. The whole point was for the test to show that men and women are, on average, equally intelligent. When it comes to IQ tests, it is no problem for an AI system to get the highest scores – provided they are allowed to train on similar materials. 

Exaggerated risks

So, the risks of AI are not so much to do with the systems becoming too intelligent, but about people using them in the wrong way, argues Peter Gärdenfors.

“If AI is used irresponsibly or for destructive ends, it could be dangerous, but ultimately it is humans who determine how the systems are used.”

The fact that the systems train on data that may be racist or sexist – since such distortions exist in the background material – is a problem, one that is currently managed through manual review. Human common sense needs to be applied to correct what AI has learned. Another example is disinformation. Fake news, however, exists regardless of artificial intelligence, Peter Gärdenfors argues, and the systems that are already affecting our decisions and commanding our attention are simple ones.

“The risk is not that AI and robots become too intelligent, but rather that we humans become too stupid. We are already steamrollered by many of these systems when we let them make choices on our behalf. It does not take an advanced system for YouTube to choose what video to show us based on previous interests, and to make a selection that is more and more spectacular,” says Peter Gärdenfors.

Another problem raised in the debate is that AI is going to put a lot of people out of work. This, however, is nothing new: the playing field changes with every technological breakthrough.

“Those who worked lighting gas streetlamps became unemployed when electricity arrived. But there are more electricians today than there are gaslighters. All new technologies take away jobs, but they also create new jobs,” says Peter Gärdenfors. 

But why, then, do people feel sympathy for four-legged robots, christen their lawnmowers and become friends with ChatGPT? And why do we say that a computer is thinking when it is slow to give us a response? 

We humans like to read a lot more into the way our machines and pets behave than what is actually there. Yet a computer cannot think. Unlike animals, it has no consciousness or intention. Chat systems do not understand what they are writing; they are not friends; they are merely simulating how people produce language.

“We read too much into our pets’ behaviours and far too much into that of robots. All pet owners exaggerate their pets’ abilities. We have a similar view of ChatGPT. We treat the programme as if we were chatting to a human. That is our own fault. We are too uncritical,” says Peter Gärdenfors.

When animals are better than AI

Based on research, Peter Gärdenfors shows in his book “Kan AI tänka” (Can AI Think) why AI is unlikely ever to think in the way that we humans and animals do. Originally, the book was to be titled “How Humans, Animals and Robots Think”. At Lund University Cognitive Science, within the Department of Philosophy, research is ongoing into the cognitive abilities of both animals and robots. The book also presents several examples in which animals’ abilities exceed those of AI systems.

Clever crows

Crows are renowned for making and using simple tools, such as bending steel wire to reach food. This shows their advanced problem-solving capacities and understanding of cause and effect. They can also plan by hiding food and remembering where they stashed it several months later. AI cannot make tools or plan in the way crows can, since AI lacks understanding of cause and effect in the physical world.

Sheepish dogs

Dogs’ sheepish expressions, often interpreted as guilt, are in fact a reaction to humans’ body language and emotions. Dogs try to avoid conflict by showing submission, but do not understand the concept of guilt. AI cannot avoid conflict using social signals such as submission, since AI lacks the capacity to interpret and act according to social context and feelings.

Empathetic, playful rats

Rats emit high-frequency sounds resembling laughter when they are tickled or playing, something that researchers interpret as a sign of joy. Research also shows that rats have the capacity for empathy. In experiments where pushing a lever gives a rat a reward but simultaneously gives another rat an electric shock, the rat chooses to stop pushing the lever, even though that means forgoing its reward. AI cannot feel empathy or laugh, even if systems can simulate these things.

Contact:

Peter Gärdenfors
Professor emeritus of Cognitive Science
peter [dot] gardenfors [at] lucs [dot] lu [dot] se

Myths about the brain

The brain is like a computer

A common metaphor is that the brain works like a computer. The brain, however, does not work sequentially, the way a computer does; rather, its calculations are distributed across many different parts at the same time. Not only that, but substances such as dopamine and oxytocin play a major role in how the brain works, an element completely absent in computers and AI systems. That is why the idea of being able to “download” a human’s brain onto a computer can be dismissed as pure science fiction.

AI is going to become more intelligent than humans 

Another common myth is that AI systems, once computers are sufficiently powerful, will surpass human intelligence. Intelligence, however, covers much more than just processing data – it includes the capacity to understand context, deal with everyday situations, feel empathy, and interpret complex social situations. AI lacks these abilities as well as the capacity to understand the contexts it operates in.