
Our Brain Responds Differently to Man-Made Intelligence - Robots - 2019

By Globalnewsarena.com
SEP 23, 2019

With its film WALL-E, Pixar Studios has struck cinematic gold once again, and its protagonist is arguably one of the most endearing characters ever to grace the screen. Despite being mostly computer-generated metal, it is remarkable how real, emotive and full of character WALL-E can be. In fact, the film's second act introduces an entire cast of robots, each with a personality of its own.

Whether or not you buy into Pixar's particular vision of humanity's future, there is no denying that both robotics and artificial intelligence are advancing rapidly. Ever since Deep Blue defeated Garry Kasparov at chess in 1997, it has seemed almost inevitable that we will one day find ourselves interacting with increasingly intelligent machines. And that makes the study of artificial intelligence a matter for psychologists as well as computer scientists.

Jianqiao Ge and Shihui Han, two psychologists from Peking University, are fascinated by how our brains process artificial intelligence. Do we treat it the same way we treat human intelligence, or is it handled differently? The pair used brain-scanning technology to answer this question and found major differences. Watching human intelligence at work activates parts of the brain that help us understand other people's perspectives - areas that stay quiet when we respond to artificial intelligence.

Ge and Han recruited 28 Chinese students and had them watch footage of a "problem-solver" working through a logic puzzle. The solver was either another human or a computer of silicon and cables (with a camera mounted on it so it could "see"). In both cases, the task was the same: the solvers wore coloured hats and had to work out whether their own hat was red or blue. As clues, they were told how many hats of each colour had been handed out among themselves and their partners (also either people or computers), and they could see one of those partners and the hat it was wearing.

This is an interesting set-up, because both the humans and the computers in this mini-drama were given the same information and had to make the same logical deduction to reach the right answer. The only difference lay in the tools they brought to bear - the humans relied on brainpower, while the computers depended on their programming.
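To make the deduction concrete, here is a minimal sketch of the kind of reasoning the problem-solver has to do. The function name, the specific hat counts and the single visible partner are illustrative assumptions rather than details from the study; the point is simply that knowing the totals and seeing a partner's hat can sometimes leave only one possible colour for your own.

    from collections import Counter

    def deduce_own_hat(total_hats, visible_hats):
        # total_hats: how many hats of each colour were handed out in total
        # visible_hats: the colours the solver can see on its partners
        remaining = Counter(total_hats)
        remaining.subtract(Counter(visible_hats))
        # Colours that could still be sitting on the solver's own head
        possible = [colour for colour, count in remaining.items() if count > 0]
        return possible[0] if len(possible) == 1 else None

    # One red and one blue hat shared between the solver and a visible partner:
    # seeing the partner in red leaves only blue for the solver.
    print(deduce_own_hat({"red": 1, "blue": 1}, ["red"]))  # blue
    # With two red hats in play, seeing one red hat settles nothing.
    print(deduce_own_hat({"red": 2, "blue": 1}, ["red"]))  # None

In this toy version, the solver "can work out" its own hat exactly when the function returns a definite colour rather than None - and judging whether that is possible is, in essence, what the watching students were asked to do.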

The students' job, as they watched the footage, was to work out whether the problem-solver was capable of deducing the colour of its own hat. As the volunteers deliberated over their answers, Ge and Han scanned their brains using a technique known as functional magnetic resonance imaging (fMRI).

They found that students watching a human solver showed greater activity in their precuneus; other studies have suggested that this part of the brain is involved in understanding other people's perspectives. The scans also revealed a drop in activity in the ventral medial prefrontal cortex (vMPFC), an area that helps us weigh new information against our own experiences. Both reactions are consistent with the results of other studies, which suggest that we understand other people's states of mind by simulating what they are thinking while suppressing our own perspective so that it does not compromise our reasoning.

But neither the precuneus nor the vMPFC showed any such change in the computer-watching group, and the connection between the two areas was weaker in students watching computers than in those watching humans.

The difference was not down to a lack of deductive effort: when the students were asked to work out the problem-solver's hat colour for themselves, their scans showed the same activation in the brain's deductive areas regardless of whether they were watching a human or a machine.

Two strategies

It seems that the trick of putting yourself in someone else's shoes simply is not applied to artificial intelligence. Because we recognise that robots and computers are governed by programs, we do not bother trying to simulate their artificial minds - instead, Ge and Han believe, we simply reason about their actions.

Indeed, when Ge and Han gave the students the simpler task of just saying which colour of hat the problem-solver could see, those watching the computer showed stronger activity in their visual cortex than those watching humans. That suggests they were paying closer attention to the details of the scene, such as where the computer's camera was pointing, rather than putting themselves in its position.

These results may help explain why autistic people often seem to take well to interacting with computers and playing with robots. Autistic people find social situations difficult because it is hard for them to put themselves in someone else's shoes. Indeed, their vMPFC fails to quieten down in the normal way, suggesting that they cannot stop their own experiences from interfering with their deductions about others. But when they interact with robots, they do not need to do that - remember that vMPFC activity never changed in the students watching computers solve the puzzle.

Ge and Han conclude that humans use very different mental strategies to understand artificial intelligence and other humans. But I wonder whether their conclusion applies to all kinds of AI. In this study, artificial intelligence was represented by a camera-equipped computer that never actually interacted with the participants. Would the result have been different if the robot in question had a more human-like design? What would happen to the precuneus and vMPFC of someone playing with a Robosapien or watching WALL-E? A question for next time, perhaps.
