
Rise of the (zany) robots: From android orchestras to frog xenobots

Every month, researchers announce new breakthroughs in the development of robots that can perform tricky surgery, rescue lost cats, keep our grandmothers company and otherwise submit themselves as slaves to human need.

Let’s see how that works out.

Meanwhile, here are some of the great and zany robot moments from the year that was 2020.

January: Robot conducts orchestra in the nude 

An android called Alter 3 conducted an orchestral piece titled Scary Beauty in Sharjah, United Arab Emirates.

The piece, full of tension and foreboding, was written by Japanese composer Keiichiro Shibuya.

Mr Shibuya said the work was “a metaphor of the relations between humans and technology”.

He noted that the robot’s conducting sometimes went a little “crazy”, making it difficult for the musicians to keep up.

The orchestra dressed in traditional black. The robot thumbed its plastic nose by taking to the podium in the nude.

February: Child robot taught to feel pain and look sad 

Minoru Asada, an engineer at Osaka University, developed sensors embedded in artificial skin that can detect both gentle touches and painful thumps and pinches.

At the annual meeting of the American Association for the Advancement of Science, he described this artificial “pain nervous system” as a building block for a machine that could ultimately experience pain.

The idea is that robots might then “empathise” with an elderly human companion’s suffering.

For full freaky effect, Mr Asada demonstrated the pain system on a robot child’s head mounted on a metal spine. Maybe to ensure we all feel sorry for the hurting robots – a perfect distraction as they take over the world.

March: How robots foster better human conversation

In a Yale experiment, groups of people were given a robot teammate – and then played a competitive game against other human/robot teams.

Some of the robots remained silent and aloof, as you sometimes get with big-money football stars. Others apologised when they screwed up:

“Sorry, guys, I made the mistake this round,” one said. “I know it may be hard to believe, but robots make mistakes too.”

Humans on teams that included a robot expressing vulnerability communicated more with each other, and later reported a more positive group experience than people teamed with silent robots or with robots that made neutral statements, such as reciting the game’s score.

“We know that robots can influence the behaviour of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a PhD candidate in sociology at the Yale Institute for Network Science.

“Our study shows that robots can affect human-to-human interactions.”

April: Half frog, half machine

This was the uh-oh moment of 2020, when artificial intelligence mated with the living cells of a frog to create an eerie hybrid of life and machine.

In a statement from the University of Vermont (UVM), the researchers explain it this way: A team of scientists has repurposed living stem cells, scraped from frog embryos, and assembled them into entirely new life forms called xenobots.

Professor Josh Bongard describes his tiny motherless creatures as “neither a traditional robot nor a known species of animal. It’s a new class of artefact”.

These millimetre-wide xenobots can live for weeks, travel about with intent, work in groups autonomously, and heal themselves after being cut. Their body designs were in fact ‘evolved’ by an algorithm that tested candidate shapes in simulation before the best were built from living cells.

The idea is they could set sail in their billions to clean the oceans of microplastics. The really smart ones could be stationed in your organs, where they’d carry out renovating surgery or deliver drugs.

“These are novel living machines,” said Professor Bongard, a computer scientist and robotics expert who co-led the research.

The stem cells were harvested from the embryos of the African frog species Xenopus laevis. That’s where the name ‘xenobot’ comes from.

May: World’s first stand-up comedian robot

Just what we need: Another angry short guy working off his resentments and calling it comedy.

Meet Jon the Robot.

Standing on a footstool, Jon put himself out there for a 32-show tour of comedy clubs in greater Los Angeles and in Oregon. No kidding.

In a project led by Oregon State University (OSU) researcher Naomi Fitter, the idea of the tour was to collect data that scientists and engineers can use to help robots and people relate to one another more effectively through humour.

“Social robots and autonomous social agents are becoming more and more ingrained in our everyday lives,” said Dr Fitter, assistant professor of robotics in the OSU College of Engineering, in a prepared statement.

“Lots of them tell jokes to engage users. Most people understand that humour, especially nuanced humour, is essential to relationship building. But it’s challenging to develop entertaining jokes for robots that are funny beyond the novelty level.”

Dr Fitter said live comedy performances are a way for robots to learn “in the wild” which jokes and which deliveries work and which ones don’t – just like human comedians do.

According to a statement from OSU:

  • Two studies comprised the comedy tour, which included assistance from a team of Southern California comedians in coming up with material true to, and appropriate for, a robot comedian
  • The first study, consisting of 22 performances in the Los Angeles area, demonstrated that audiences found a robot comic with good timing – giving the audience the right amount of time to react – to be significantly funnier than one without good timing
  • The second study, based on 10 routines in Oregon, determined that an “adaptive performance” – delivering post-joke “tags” that acknowledge an audience’s reaction to the joke – wasn’t necessarily funnier overall, but the adaptations almost always improved the audience’s perception of individual jokes. In the second study, all performances featured appropriate timing.

“In bad-timing mode, the robot always waited a full five seconds after each joke, regardless of audience response,” Dr Fitter said.

“In appropriate-timing mode, the robot used timing strategies to pause for laughter and continue when it subsided, just like an effective human comedian would. Overall, joke response ratings were higher when the jokes were delivered with appropriate timing.”
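Out of curiosity, here’s roughly what that timing logic might look like in code. This is a minimal Python sketch, assuming a hypothetical mic.level() that returns the room’s normalised noise level – an illustration of the strategy Dr Fitter describes, not the OSU team’s actual software.

    import time

    LAUGHTER_THRESHOLD = 0.3   # assumed: mic level above which the room counts as laughing
    MAX_PAUSE = 10.0           # assumed: safety cap so the robot never stalls in a noisy room

    def wait_for_laughter_to_subside(mic):
        """Appropriate-timing mode: hold while the audience laughs, resume once it quietens."""
        start = time.monotonic()
        while mic.level() > LAUGHTER_THRESHOLD:   # mic is a hypothetical room-noise sensor
            if time.monotonic() - start > MAX_PAUSE:
                break                             # don't wait forever
            time.sleep(0.1)                       # poll the room about ten times a second

    def bad_timing_pause():
        """Bad-timing mode: a flat five seconds after every joke, audience be damned."""
        time.sleep(5.0)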

June: Robot learns to cook a perfect omelette

A team of engineers from the University of Cambridge trained a robot to prepare an omelette, “all the way from cracking the eggs to plating the finished dish, and refined the ‘chef’s’ culinary skills to produce a reliable dish that actually tastes good.”

The researchers used machine learning to train the robot to account for highly subjective matters of taste.
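The article doesn’t say which method was used, but one simple way to optimise a recipe against noisy, subjective ratings is a propose-taste-keep loop. Here’s a hedged Python sketch; the parameters and the stand-in rating function are invented for illustration, not taken from the Cambridge study.

    import random

    # Hypothetical recipe parameters: whisking time (s), pan temperature (C), cooking time (s)
    recipe = {"whisk_s": 30.0, "pan_temp_c": 160.0, "cook_s": 90.0}

    def taste_rating(r):
        """Stand-in for a human taster scoring the omelette out of 10 - noisy and subjective."""
        ideal = {"whisk_s": 45.0, "pan_temp_c": 150.0, "cook_s": 120.0}
        error = sum(abs(r[k] - ideal[k]) / ideal[k] for k in r)
        return max(0.0, 10.0 - 5.0 * error) + random.gauss(0, 0.5)

    best_score = taste_rating(recipe)
    for _ in range(50):                            # 50 trial omelettes
        candidate = {k: v * random.uniform(0.9, 1.1) for k, v in recipe.items()}
        score = taste_rating(candidate)
        if score > best_score:                     # keep only the tweaks the taster preferred
            recipe, best_score = candidate, score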

July: Tiny robot cleans water, inspired by coral polyps 

Researchers at Eindhoven University of Technology developed a tiny plastic robot, made of responsive polymers, that moves under the influence of light and magnetism.

The scientists predict that in the future this ‘wireless aquatic polyp’ should be able “to attract and capture contaminant particles from the surrounding liquid or pick up and transport cells for analysis in diagnostic devices”.

The mini robot is inspired by the coral polyp: the small, soft creature with tentacles that makes up the corals of the ocean.

August: Chameleon-like robot captures things with its tongue

Researchers at the Seoul National University of Science and Technology developed a creepy robot with elastic-tongue technology modelled on fly-hunting chameleons and frogs.

The tongue has a grabbing capability that could be built into robots or drones, enabling them to pick up or drop off packages at super speed, and from a safe distance. Or simply snatch a gun or knife from a human hand.

For now, the technology is at a small scale: The tongue can snatch a 30-gram object from 80 centimetres away, in less than 600 milliseconds.
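For a sense of how quick that is, treat the 600 milliseconds as the time to cover the full 80-centimetre reach – that gives a lower bound on the tongue’s average speed:

    distance_m = 0.80            # the 80 cm reach
    time_s = 0.60                # "less than 600 ms", so the true speed is higher
    print(distance_m / time_s)   # ~1.33 metres per second, before any return trip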

September: Why human-like robots creep us out

Psychologists at Emory University investigated the cognitive mechanisms underlying the creepy feeling we get from robots that appear too life-like.

There’s a paradox here. As the researchers explain: Androids, or robots with human-like features, are often more appealing to people than those that resemble machines – but only up to a certain point.

The feeling of affinity “can plunge into one of repulsion as a robot’s human likeness increases”. This is commonly described as entering a zone known as ‘the uncanny valley’.

A common hypothesis that attempts to explain the phenomenon is ‘the mind-perception theory’, which proposes that when people see a robot with human-like features, they automatically add a mind to it.

A growing sense that a machine appears to have a mind leads to the creepy feeling, according to this theory.

“We found that the opposite is true,” says Dr Wang ShenSheng, first author of the new study, speaking in a prepared statement.

“It’s not the first step of attributing a mind to an android but the next step of ‘dehumanising’ it by subtracting the idea of it having a mind that leads to the uncanny valley. Instead of just a one-shot process, it’s a dynamic one.”

The research may help in unravelling the mechanisms involved in mind blindness – the inability to distinguish between humans and machines – such as in cases of extreme autism or some psychotic disorders.

October: Robots know when they’re injured, and can self-repair

So what happens when war breaks out between humans and robots? Thank goodness there are all those guns around, right?

Maybe not. Scientists from Nanyang Technological University, Singapore have developed a way for robots to have the artificial intelligence (AI) to recognise pain and to self-repair when damaged.

According to a statement from the university:

  • The system has AI-enabled sensor nodes to process and respond to ‘pain’ arising from pressure exerted by a physical force. The system also allows the robot to detect and repair its own damage when minorly ‘injured’, without the need for human intervention
  • The new approach embeds AI into the network of sensor nodes, connected to multiple small, less powerful processing units that act like ‘mini-brains’ distributed on the robotic skin (a rough sketch of this idea follows the list)
  • This means learning happens locally and the wiring requirements and response time for the robot are reduced five to 10 times compared to conventional robots, say the scientists
  • Combining the system with a type of self-healing ion gel material means that the robots, when damaged, can recover their mechanical functions without human intervention.
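As flagged in the list above, here is a rough Python sketch of the local-processing idea – each skin node classifies its own pressure reading on the spot rather than streaming raw data to one central brain. The node structure, thresholds and responses are invented for illustration; this isn’t NTU’s actual system.

    from dataclasses import dataclass

    @dataclass
    class SkinNode:
        """One 'mini-brain' on the robotic skin, processing its own sensor locally."""
        node_id: int
        pain_threshold: float = 0.7          # assumed normalised pressure level

        def process(self, pressure: float) -> str:
            # The decision is made here at the node; only high-level events
            # would travel upstream, which is what cuts wiring and response time.
            if pressure >= self.pain_threshold:
                return "pain: withdraw and trigger the self-healing gel"
            if pressure > 0.2:
                return "touch: log the contact"
            return "idle"

    # Each node reacts independently of the others.
    skin = [SkinNode(i) for i in range(4)]
    for node, pressure in zip(skin, [0.1, 0.5, 0.9, 0.0]):
        print(node.node_id, node.process(pressure))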

November: Amphibious robot runs on water

Israeli engineers have built a “high-speed amphibious robot inspired by the movements of cockroaches and lizards”.

Developed by Ben-Gurion University of the Negev, the AmphiSTAR robot swims and runs on top of water at high speeds and crawls on difficult terrain.

“The AmphiSTAR uses a sprawling mechanism inspired by cockroaches, and it is designed to run on water at high speeds like the basilisk lizard,” says Dr David Zarrouk, director of the university’s Bioinspired and Medical Robotics Laboratory.

“We envision that AmphiSTAR can be used for agricultural, search and rescue and excavation applications, where both crawling and swimming are required.”

The palm-sized AmphiSTAR is a wheeled robot fitted underneath with four propellers whose axes can be tilted using the sprawl mechanism.

The propellers act as wheels over ground and as fins on the water, propelling the robot as it swims or runs across the surface at speeds of up to 1.5 metres per second.

Two air tanks enable it to float and to transition smoothly between high-speed skimming over the water and lower-speed swimming, and from crawling to swimming and vice versa.

“Our future research will focus on the scalability of the robot and on underwater swimming,” Dr Zarrouk said in a prepared statement.

December: Robots can create peer pressure in humans 

What’s the best way that artificial intelligence could take over the human world? By whispering in our ear and encouraging us to be stupid.

A new experiment has demonstrated that robots “can encourage people to take greater risks.”

No doubt casinos would be keen on the idea that robot companions could encourage a deeper plunge at the poker machines or tables.

The experiment was a simulated gambling scenario: Participants who gambled in the company of a trouble-making robot pushed their betting to the point where the game literally blew up in their faces.

Participants who played alone, and had nothing to influence their behaviours, were “significantly” less likely to take similar risks.

