But press harder and you might discover a second way of sensing touch, through your knuckles and other joints. And it's that sense — a sense of torque, in robotics jargon — that the researchers have replicated in their new system.
The robotic arm has six built-in torque sensors, each capable of detecting even the slightest pressure on any part of the device. After precisely measuring the magnitude and direction of that force, a series of algorithms maps where the person is touching the robot and analyzes what exactly they are trying to communicate. For example, if a person uses a finger to draw letters or numbers anywhere on the surface of the robotic arm, the robot can recognize the characters from that movement. Any part of the robot can be used as a virtual button.
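To make that mapping concrete, here is a minimal sketch, in Python, of how contact localization from joint-torque readings can work in principle. It is not the DLR team's algorithm: the two-link planar arm, the link lengths, and the assumption that every push lands normal to a link's surface are all simplifications chosen for illustration. The underlying idea is that a push at a given point produces a characteristic pattern of torques across the joints, so searching candidate points for the one whose best-fit push explains the measured torques recovers where, and how hard, the robot was touched.

```python
import numpy as np

# Illustrative sketch only (not the published estimator): for a planar
# 2-link arm, a push of magnitude m, normal to a link's surface at point
# p, produces joint torques tau = m * J_p(q)^T @ n, where J_p is the
# Jacobian to p and n is the surface normal. Given measured torques, we
# grid-search candidate contact points and keep the best explanation.

LEN = [0.5, 0.4]  # link lengths in meters (assumed values)

def point_on_arm(q, link, s):
    """World position of a point a fraction s along the given link."""
    elbow = LEN[0] * np.array([np.cos(q[0]), np.sin(q[0])])
    if link == 0:
        return s * elbow
    tip = elbow + LEN[1] * np.array([np.cos(q[0] + q[1]),
                                     np.sin(q[0] + q[1])])
    return elbow + s * (tip - elbow)

def link_normal(q, link):
    """Unit normal to the link's surface (perpendicular to its axis)."""
    a = q[0] if link == 0 else q[0] + q[1]
    return np.array([-np.sin(a), np.cos(a)])

def jacobian_to_point(q, link, s, eps=1e-6):
    """Numerical 2x2 Jacobian of the contact point w.r.t. joint angles."""
    J = np.zeros((2, 2))
    for i in range(2):
        dq = np.zeros(2)
        dq[i] = eps
        J[:, i] = (point_on_arm(q + dq, link, s)
                   - point_on_arm(q - dq, link, s)) / (2 * eps)
    return J

def localize_contact(q, tau):
    """Return (residual, link, s, magnitude) best explaining tau."""
    best = None
    for link in (0, 1):
        for s in np.linspace(0.05, 1.0, 40):
            g = jacobian_to_point(q, link, s).T @ link_normal(q, link)
            m = float(g @ tau) / float(g @ g)  # least-squares magnitude
            residual = np.linalg.norm(m * g - tau)
            if best is None or residual < best[0]:
                best = (residual, link, s, m)
    return best

# Simulate a 2 N push on the second link, then recover it from torques.
q = np.array([0.4, 0.6])
tau_meas = 2.0 * (jacobian_to_point(q, 1, 0.7).T @ link_normal(q, 1))
res, link, s, m = localize_contact(q, tau_meas)
print(f"contact on link {link + 1}, {s:.2f} along it, ~{m:.1f} N push")
```

A real arm with six torque sensors and a full three-dimensional surface model works the same way in spirit, just with more joints, more unknowns and a richer model of the robot's shape.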
This means that every square inch of the robot effectively becomes a touchscreen, but without the cost, fragility or wiring, says Maged Iskandar, a researcher at the German Aerospace Center and lead author of the study.
“Human-robot interaction, where humans are in close contact with robots and giving instructions to them, is still suboptimal because humans need input devices,” Iskandar said. “If we could use the robot itself as a device, the interaction would be much smoother.”
Such systems could offer not only a cheaper and easier way to give robots a sense of touch, but also new ways to communicate with them, which is especially important for large robots such as humanoids, a category that continues to attract billions of dollars in venture capital investment.