MIT scientists taught robots how to sabotage each other

Researchers at the Massachusetts Institute of Technology have created a simulation of two socially aware robots that can now tell if they're being sabotaged or helped. In a new paper presented at the 2021 Conference on Robot Learning in London this week, a team from MIT demonstrated how they used a mathematical framework to imbue a set of robotic agents with social skills so that they could interact with one another in a human-like way. Then, in a simulated environment, the robots could observe one another, guess what task the other wanted to accomplish, and choose either to help or to hinder it. In effect, the bots thought like humans.

Research like this might sound a little strange, but studying how different kinds of social situations play out among robots could help scientists improve future human-robot interactions. Additionally, this new model of artificial social skills could serve as a measurement system for human socialization, which the team at MIT says could help psychologists study autism or analyze the effects of antidepressants.

Many computer scientists believe that giving artificial intelligence systems a sense of social skills is the final barrier to making robots genuinely useful in our homes and in settings like hospitals or care facilities, and to making them friendly to us, says Andrei Barbu, a research scientist at MIT and an author of the recent paper. After retooling the AI, researchers can then bring these tools into the field of cognitive science to "really understand something quantitatively that's been very elusive," he says.

“Social interactions are not particularly well-studied within computer science or robotics for a few reasons. It’s hard to study social interactions. It’s not something that we assign a clear number,” says Barbu. “You don’t say ‘this is help number 7’ when you’re interacting with someone.”

This is unlike the usual problems that arise in AI, such as object recognition in images, which are fairly well-defined, he says. Even deciding what kind of interactions two people are having—the easiest level of the problem—can be extremely difficult for a machine.

So, how can scientists build robots that not only do a task, but also understand what it means to do the task? Could you ask a robot to understand the game you’re playing, figure out the rules just by watching, and play the game with you?

To test out what was possible, Barbu and colleagues set up a simple two-dimensional grid that virtual robotic agents could move around in to complete different tasks. The agents on the screen looked like cartoon robot arms, and they were instructed to either move a water bucket to a tree or to a flower.
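The idea of one agent watching another and guessing its goal can be illustrated with a toy example. The sketch below is purely hypothetical and is not the mathematical framework from the MIT paper: it assumes an observer scores each candidate goal ("tree" or "flower") by counting how many of the other agent's observed steps reduced its distance to that goal, then picks the best-supported one. The grid coordinates and goal positions are invented for illustration.

```python
# Toy goal inference on a 2D grid (illustrative only, not the MIT model).
# An observer watches another agent's path and guesses which goal it is
# moving toward by checking which goal its steps bring it closer to.

GOALS = {"tree": (4, 0), "flower": (0, 4)}  # assumed positions on the grid

def manhattan(a, b):
    """Grid (Manhattan) distance between two cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def infer_goal(path, goals=GOALS):
    """Score each candidate goal by how many observed steps reduced the
    distance to it, and return the best-supported goal."""
    scores = {name: 0 for name in goals}
    for prev, cur in zip(path, path[1:]):
        for name, pos in goals.items():
            if manhattan(cur, pos) < manhattan(prev, pos):
                scores[name] += 1
    return max(scores, key=scores.get)

# An agent starting at (0, 0) and stepping right is heading for the tree.
path = [(0, 0), (1, 0), (2, 0), (3, 0)]
print(infer_goal(path))  # tree
```

A helping agent could use the inferred goal to move the bucket the same way; a sabotaging agent could do the opposite. The real paper frames this far more generally, with agents reasoning about each other's rewards rather than raw distances.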
