What’s the best way that Artificial Intelligence could take over the human world? By whispering in our ear and encouraging us to be stupid.
A new experiment has demonstrated that robots “can encourage people to take greater risks.”
The experiment was a simulated gambling scenario: participants who gambled in the company of a trouble-making robot pushed their betting to the point where the game literally blew up in their faces.
Participants who played alone, and had nothing to influence their behaviours, were “significantly” less likely to take similar risks.
The researchers, from the UK’s University of Southampton, say that “increasing our understanding of whether robots can affect risk-taking could have clear ethical, practical and policy implications.”
Dr Yaniv Hanoch, Associate Professor in Risk Management at the University of Southampton, led the study. In a prepared statement he said:
“We know that peer pressure can lead to higher risk-taking behaviour. With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
How the experiment worked
It was a bit like an old-fashioned Bugs Bunny cartoon.
The researchers recruited 180 undergraduate students to undergo the Balloon Analogue Risk Task (BART) – a computer assessment “that asks participants to press the space bar on a keyboard to inflate a balloon displayed on the screen.”
With each press of the space bar, the balloon inflates slightly, and the player is rewarded with a penny that’s added to a money bank.
The balloons explode randomly, causing the player to lose all the winnings they’d accrued for that particular balloon.
At any point the player has the option to cash in the money before the balloon explodes, then move on to the next balloon.
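The mechanics of the task can be sketched in a few lines of Python. The per-pump explosion probability (1/64) and the one-penny reward are illustrative assumptions rather than the study’s exact payout schedule; only the structure — pump, risk a burst, cash out — mirrors the description above.

```python
import random

def play_balloon(pumps_before_cash_out, explode_prob=1/64, reward=0.01):
    """Simulate one BART balloon.

    Each pump adds `reward` to a temporary bank but risks an
    explosion (probability `explode_prob` per pump, an assumed
    value). A burst forfeits the temporary bank; cashing out
    banks it for good.
    """
    bank = 0.0
    for _ in range(pumps_before_cash_out):
        if random.random() < explode_prob:
            return 0.0          # balloon burst: temporary winnings lost
        bank += reward          # pump succeeded: one penny added
    return bank                 # player cashed out in time

# A cautious player (10 pumps per balloon) vs a bolder one (50 pumps),
# each playing 10,000 balloons:
cautious = sum(play_balloon(10) for _ in range(10_000))
risky = sum(play_balloon(50) for _ in range(10_000))
```

With these (assumed) numbers, the bolder strategy reliably earns more in total despite bursting far more balloons — the same trade-off the encouraged group ran into.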
The participants were divided into three groups:
- One group took the test in a room on their own. This was the control group.
- A second group took the test alongside a robot that provided the instructions but stayed silent the rest of the time.
- The final, experimental group took the test with a trouble-making robot that provided the instructions but also egged the player on with prompts such as “Why did you stop pumping?”
The results showed that the group who were encouraged by the robot “took more risks, blowing up their balloons significantly more frequently than those in the other groups did.”
Losers, right? Well, no. With the robot egging them on, they also earned more money overall.
There was no significant difference in the behaviours of the students accompanied by the silent robot and those with no robot.
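Why can pumping harder pay off even though more balloons burst? The trade-off has a simple expected-value shape. Assuming, purely for illustration, a constant burst probability p on each pump and a penny per pump (the real task’s schedule may differ), the expected payout of planning n pumps is 0.01 · n · (1 − p)^n: it rises with bolder play, peaks, then collapses — so moderate extra risk-taking genuinely earns more.

```python
def expected_payout(n, p=1/64, reward=0.01):
    """Expected winnings for a fixed plan of n pumps, assuming a
    constant (illustrative) burst probability p on each pump.

    Payout is reward * n if all n pumps survive, which happens
    with probability (1 - p) ** n; otherwise zero.
    """
    return reward * n * (1 - p) ** n

# With p = 1/64, the curve peaks around n = 64 pumps:
# expected_payout(10)  ~ $0.085
# expected_payout(64)  ~ $0.23
# expected_payout(200) ~ $0.086
```

In other words, a cautious player who always stops early leaves money on the table, while a reckless one gives it back in explosions; the encouraged group happened to land closer to the profitable middle.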
And the scientists conclude?
Dr Hanoch said: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before.
“So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
The researchers now believe that further studies are needed to see whether similar results would emerge from human interaction with other artificial intelligence (AI) systems, such as digital assistants or on-screen avatars.
Dr Hanoch noted: “With the widespread [use] of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.”
On the one hand, the results “might raise alarms” about the prospect of robots causing harm by increasing risky behavior.
But then, “our data points to the possibility of using robots and AI in preventive programs, such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”
Dr Hanoch said the results pointed to both possible benefits and perils that robots might pose to human decision-making.
Although increasing risk-taking behavior in some cases has obvious advantages, “it could also have detrimental consequences that are only now starting to emerge.”
Young children even more vulnerable to manipulation
A 2018 study found that young children are significantly more likely than adults to have their opinions and decisions influenced by robots.
The study, conducted at the University of Plymouth, compared how adults and children respond to an identical task when in the presence of both their peers and humanoid robots.
It showed that while adults regularly have their opinions influenced by peers, something also demonstrated in previous studies, they are largely able to resist being persuaded by robots.
However, children aged between seven and nine were more likely to give the same responses as the robots, even if they were obviously incorrect.