Giving Robots a Body: How AI is Learning Proprioception

Category Science

tldr #

Many AI experts believe that for AI to reach its full potential, it needs to be physically embodied. A new machine learning approach developed at the Technical University of Munich lets robots learn how their bodies are configured, inferring the layout and positioning of their limbs using nothing more than motion sensors attached to them. The approach has been demonstrated on a variety of robots and works in real time, allowing them to continuously update their internal models as their physical form changes.


content #

Many experts believe more general forms of artificial intelligence will be impossible without giving AI a body in the real world. A new approach that allows robots to learn how their body is configured could accelerate this process.

The ability to intuitively sense the layout and positioning of our bodies, something known as proprioception, is a powerful capability. Even more impressive is our capacity to update our internal model of how all these parts are working—and how they work together—depending on both internal factors like injury or external ones like a heavy load.

Experts believe AI needs to be physically embodied for it to reach its full potential

Replicating these capabilities in robots will be crucial if they’re to operate safely and effectively in real-world situations. Many AI experts also believe that for AI to achieve its full potential, it needs to be physically embodied rather than simply interacting with the real world through abstract mediums like language. Giving machines a way to learn how their body works is likely a crucial ingredient.

Robots need an internal model of their bodies to interact safely with their environment

Now, a team from the Technical University of Munich has developed a new kind of machine learning approach that allows a wide variety of different robots to infer the layout of their bodies using nothing more than feedback from sensors that track the movement of their limbs. "The embodiment of a robot determines its perceptual and behavioral capabilities," the researchers write in a paper in Science Robotics describing the work. "Robots capable of autonomously and incrementally building an understanding of their morphology can monitor the state of their dynamics, adapt the representation of their body, and react to changes to it."

The team developed a new type of machine learning approach that allows robots to infer the layout of their body using only sensors on their limbs

All robots require an internal model of their bodies to operate effectively, but typically this is either hard coded or learned using external measuring devices or cameras that monitor their movements. In contrast, the new approach attempts to learn the layout of a robot’s body using only data from inertial measurement units—sensors that detect movement—placed on different parts of the robot.

The team’s approach relies on the fact that there will be overlap in the signals from sensors closer together or on the same parts of the body. This makes it possible to analyze the data from these sensors to work out their positions on the robot’s body and their relationships with each other.
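The core idea can be illustrated with a toy sketch. The data and numbers below are hypothetical, not the paper's actual method or parameters: two simulated sensors sharing a limb share an underlying motion signal, so their readings correlate strongly, while sensors on different limbs do not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical IMU readings: 4 sensors x 1000 time steps.
# Sensors 0-1 sit on one limb, sensors 2-3 on another, so each
# pair shares an underlying motion signal plus independent noise.
limb_a = rng.standard_normal(1000)
limb_b = rng.standard_normal(1000)
readings = np.stack([
    limb_a + 0.1 * rng.standard_normal(1000),
    limb_a + 0.1 * rng.standard_normal(1000),
    limb_b + 0.1 * rng.standard_normal(1000),
    limb_b + 0.1 * rng.standard_normal(1000),
])

# Pairwise correlation: sensors on the same limb overlap strongly,
# sensors on different limbs barely at all.
corr = np.corrcoef(readings)
same_limb = corr[0, 1]       # close to 1
different_limb = corr[0, 2]  # close to 0
print(same_limb > 0.9, abs(different_limb) < 0.2)
```

In this simplified picture, the correlation matrix alone is enough to tell which sensors move together; the real system must additionally recover the sensors' positions and relationships from much richer dynamics.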

The machine learning approach does not require a massive dataset, unlike deep learning

First, the team gets the robot to generate sensorimotor data via "motor babbling," which involves randomly activating all of the machine’s servos for short periods to generate random movements. They then use a machine learning approach to work out how the sensors are arranged and identify subsets that relate to specific limbs and joints.
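The two steps above can be sketched end to end. Everything here is a hypothetical stand-in for the real pipeline: random commands play the role of motor babbling, each simulated sensor simply follows its servo, and a plain correlation threshold stands in for the team's machine learning step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "motor babbling": random commands to 3 servos,
# producing random movements over T time steps.
T, n_servos = 500, 3
commands = rng.uniform(-1.0, 1.0, size=(T, n_servos))

# Each limb carries 2 IMUs; a sensor's reading follows its servo's
# command plus measurement noise (a toy stand-in for real dynamics).
sensor_to_servo = [0, 0, 1, 1, 2, 2]
readings = np.stack([
    commands[:, s] + 0.1 * rng.standard_normal(T)
    for s in sensor_to_servo
])

# Group sensors whose signals overlap: thresholding the correlation
# matrix recovers which sensors share a limb.
corr = np.corrcoef(readings)
groups = [np.flatnonzero(corr[i] > 0.8).tolist() for i in range(6)]
print(groups[0])  # sensors 0 and 1 share a limb -> [0, 1]
```

A fixed threshold works here only because the toy signals are so clean; the actual approach identifies subsets of sensors belonging to specific limbs and joints from far noisier sensorimotor data.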

The researchers applied their approach to a variety of robots both in simulations and real-world experiments, including a robotic arm, a small humanoid robot, and a six-legged robot. They showed that all the robots could develop an understanding of the location of their joints and which way those joints were facing.

The approach was applied to different robots, ranging from robotic arms to humanoid robots and six-legged robots

More importantly, the approach does not require a massive dataset like the deep learning methods underpinning most modern AI and can instead be carried out in real-time. That opens it up for use in dynamic environments where robots could continuously adapt and adjust their models as their physical form, or situation, constantly changes.

