Press "Enter" to skip to content

Flexible sensors, AI model to aid soft robots in 3D

Leslie

In a first step toward learning-based control of soft robots, Massachusetts Institute of Technology (MIT) researchers have enabled a soft robotic arm to understand its configuration in 3D space.

A kirigami-based design and fabrication method was used to create these sensors.
Credit: Ryan L. Truby, MIT CSAIL

The researchers validated their system on a soft robotic arm resembling an elephant trunk, which can predict its own position as it autonomously swings around and extends. The findings appear in a paper published in the journal IEEE Robotics and Automation Letters, according to a 13 February release.

Challenge of shaping soft sensors

Soft robots are constructed from highly compliant materials, similar to those found in living organisms, and are employed as bio-inspired alternatives to traditional rigid robots. However, giving them autonomous control is very difficult because they can move in a virtually infinite number of directions at any given moment, which makes it hard to train the planning and control models that drive automation.

Traditional methods for achieving autonomous control rely on large systems of multiple motion-capture cameras that give the robots feedback about their 3D movement and position. But such setups are impractical for soft robots in real-world applications, the MIT researchers say.

How they did it

In this case, the sensors can be fabricated using off-the-shelf materials, meaning any lab can develop its own systems, says Ryan Truby, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) who is co-first author on the paper along with CSAIL postdoc Cosimo Della Santina.

Truby employed sheets of conductive materials used for electromagnetic interference shielding. These materials have “piezoresistive” properties, meaning their electrical resistance changes when they are strained. As the sensor deforms in response to the trunk’s stretching and compressing, its changing electrical resistance is converted to a specific output voltage. The voltage is then used as a signal correlating to that movement.
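The article does not describe the readout circuit, but a common way to turn a changing resistance into a voltage is a simple voltage divider against a fixed reference resistor. Here is a minimal sketch of that conversion; the supply voltage and reference resistance below are assumed values, not figures from the paper:

```python
V_SUPPLY = 3.3    # assumed supply voltage in volts (not given in the article)
R_REF = 10_000.0  # assumed fixed reference resistor in ohms

def divider_voltage(r_sensor: float) -> float:
    """Tap voltage of a divider whose lower leg is the piezoresistive sensor.

    As strain changes r_sensor, the tap voltage shifts, yielding the
    movement-correlated signal described above.
    """
    return V_SUPPLY * r_sensor / (r_sensor + R_REF)

# Example: strain drops the sensor's resistance from 10 kOhm to 8 kOhm.
print(divider_voltage(10_000.0))  # ~1.65 V at rest
print(divider_voltage(8_000.0))   # ~1.47 V under strain
```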

The researchers’ robotic trunk comprises three segments, each with four fluidic actuators (12 total) used to move the arm. They fused one sensor over each segment, with each sensor covering and gathering data from one embedded actuator in the soft robot. To attach the sensors, they used “plasma bonding”, a technique that energizes the surface of a material to make it bond to another material.
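For concreteness, that layout can be pictured as a small data structure: three segments, four fluidic actuators each, one sensor sheet bonded over each segment. The encoding below is purely illustrative; none of the names come from the paper:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One of the trunk's three segments (illustrative encoding)."""
    actuators: int = 4  # fluidic actuators driving this segment
    sensors: int = 1    # one kirigami sensor sheet plasma-bonded on top

# Three segments give the 12 actuators described in the article.
arm = [Segment() for _ in range(3)]
assert sum(seg.actuators for seg in arm) == 12
```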

To estimate the soft robot’s configuration using only the sensors, the researchers built a deep neural network to do most of the heavy lifting, cutting through the noise to capture meaningful feedback signals. In experiments, the researchers had the trunk swing around and extend itself in random configurations for approximately an hour and a half. They used a traditional motion-capture system for ground-truth data.
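As a rough illustration of this kind of regression network, here is a small fully connected model mapping sensor voltages to a pose estimate. The paper’s actual architecture, input channels, and configuration encoding are not given in the article, so the layer sizes and dimensions below are assumptions:

```python
import torch.nn as nn

N_CHANNELS = 3  # assumed: one voltage channel per sensor sheet (one per segment)
POSE_DIM = 9    # assumed: e.g., a 3D position for each of the three segment tips

# Small multilayer perceptron: noisy sensor voltages in, pose estimate out.
model = nn.Sequential(
    nn.Linear(N_CHANNELS, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, POSE_DIM),
)
```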

In training, the model analyzed data from its sensors to predict a configuration and compared its predictions to the ground-truth data being collected simultaneously. In doing so, the model “learns” to map signal patterns from its sensors to real-world configurations.
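A supervised training step matching that description might look like the following, reusing the model sketched above, with motion-capture poses as labels and mean-squared error as the loss. All of these specifics are assumptions rather than details from the paper:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

def train_step(sensor_batch: torch.Tensor, mocap_batch: torch.Tensor) -> float:
    """One gradient step: predict a configuration, compare to mocap ground truth."""
    optimizer.zero_grad()
    predicted = model(sensor_batch)         # configuration predicted from sensor signals
    loss = loss_fn(predicted, mocap_batch)  # error against motion-capture ground truth
    loss.backward()                         # backpropagate the error
    optimizer.step()                        # update weights: the model "learns" the mapping
    return loss.item()
```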

Way forward

Currently, the neural network and sensor skin are not sensitive enough to capture subtle motions or dynamic movements. One future aim is to help make artificial limbs that can more dexterously handle and manipulate objects in the environment.

“Think of your own body: You can close your eyes and reconstruct the world based on feedback from your skin,” says co-author Daniela Rus, director of CSAIL and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “We want to design those same capabilities for soft robots.”

“We want to use these soft robotic trunks, for instance, to orient and control themselves automatically, to pick things up and interact with the world. This is a first step toward that type of more sophisticated automated control,” adds Truby.