Machine vision is nothing new, but giving a robot detailed, real-time depth perception is still challenging. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have demonstrated one solution using a touch-based sensor technology called GelSight.
GelSight has been eight years in the making, and it was recently demonstrated with a robotic arm. Drawing on data from those experiments, the researchers presented two papers at the International Conference on Robotics and Automation in Singapore last week.
The sensor needs to physically touch an object in order to capture its geometry. For this purpose, its front face is a block of transparent rubber whose outer surface is coated with metallic paint. The paint gives every touched surface the same uniform reflectivity, so the object's geometry becomes far easier for computer vision algorithms to infer, whatever the object is made of.
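The article does not name the algorithm, but a standard way to recover shape from a uniformly reflective surface is photometric stereo: photograph the surface under several known light directions and solve for a surface normal at each pixel. The sketch below is purely illustrative; the image sizes, light directions, and Lambertian reflectance model are our assumptions, not details from the MIT papers.

```python
import numpy as np

# Hypothetical setup: three grayscale images of the painted gel surface,
# each lit from a different, known direction (classic photometric stereo).
H, W = 480, 640
images = np.random.rand(3, H, W)           # stand-in for real sensor frames
light_dirs = np.array([                    # unit vectors toward each light
    [ 0.50,  0.000, 0.866],
    [-0.25,  0.433, 0.866],
    [-0.25, -0.433, 0.866],
])

# Under a Lambertian model, intensity = albedo * (light_dir . normal).
# Stacking the three lighting equations per pixel gives L @ (albedo*n) = I,
# which we solve for every pixel at once.
I = images.reshape(3, -1)                            # (3, H*W)
g = np.linalg.lstsq(light_dirs, I, rcond=None)[0]    # albedo-scaled normals

albedo = np.linalg.norm(g, axis=0) + 1e-8
normals = (g / albedo).T.reshape(H, W, 3)            # unit normal per pixel

# Surface gradients follow from the normals; integrating them yields a
# height map of the pressed object (crude cumulative sums for illustration).
p = -normals[..., 0] / np.clip(normals[..., 2], 1e-3, None)
q = -normals[..., 1] / np.clip(normals[..., 2], 1e-3, None)
height = np.cumsum(q, axis=0) + np.cumsum(p, axis=1)
```

A production pipeline would add calibration and a proper gradient-field integration step, but per-pixel normal recovery like this is the core of the technique.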
When the sensor is pressed against an object, the rubber conforms to the object's shape, and a camera on the other side of the gel records how the painted surface deforms. Computer vision algorithms translate those reflections into a detailed map of the contact geometry, and that data is then fed to a neural network that can identify the object.
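The article gives no details of that network, but a minimal sketch of the kind of classifier it alludes to, written in PyTorch, might look like the following. The architecture, input size, and class count are hypothetical choices for illustration, not the design from the papers.

```python
import torch
import torch.nn as nn

# A small convolutional network that classifies a one-channel tactile
# height map into one of several object classes. All sizes here are
# illustrative assumptions.
class TactileClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # pool to a fixed-size feature vector
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.features(x).flatten(1)
        return self.head(feats)

# Usage: a batch of height maps from the sensor, shaped (N, 1, H, W).
model = TactileClassifier(num_classes=10)
height_maps = torch.randn(4, 1, 480, 640)  # stand-in for real tactile data
logits = model(height_maps)                # (4, 10) class scores
```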
“I think that the GelSight technology, as well as other high-bandwidth tactile sensors, will make a big impact in robotics,” said Sergey Levine, an assistant professor of electrical engineering and computer science at the University of California at Berkeley. “For humans, our sense of touch is one of the key enabling factors for our amazing manual dexterity…Software is finally catching up with the capabilities of our sensors. Machine learning algorithms inspired by innovations in deep learning and computer vision can process the rich sensory data from sensors such as the GelSight to deduce object properties.”