The key to allowing the hand to do this was a camera. The biomedical engineers at Newcastle University fitted their robotic hand with a 99p camera that automatically takes pictures of whatever is in front of the hand. Neural networks, a form of artificial intelligence built into the hand's software, recognize certain objects by their shape and size.
The recognition triggers one of four corresponding hand grips the AI has learned: palm wrist neutral (think of what your hand does when you pick up a cup); palm wrist pronated (such as grasping a TV remote); tripod (thumb and two fingers); and pinch (thumb and index finger).
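The recognize-then-grip step can be pictured as a simple lookup from a recognized object category to one of the four learned grips. The sketch below is purely illustrative, not the Newcastle team's actual code: the object categories, the mapping, and the function name are all assumptions made for the example.

```python
# Illustrative sketch only: map a classifier's predicted object
# category to one of the four grip patterns described in the article.

# The four learned grips named in the article.
GRIPS = ["palm wrist neutral", "palm wrist pronated", "tripod", "pinch"]

# Hypothetical mapping from recognized object categories to grips.
OBJECT_TO_GRIP = {
    "cup": "palm wrist neutral",
    "tv_remote": "palm wrist pronated",
    "ball": "tripod",
    "coin": "pinch",
}

def select_grip(recognized_object: str) -> str:
    """Return the grip to trigger for a recognized object.

    Falls back to a neutral grip for unrecognized objects.
    """
    return OBJECT_TO_GRIP.get(recognized_object, "palm wrist neutral")

print(select_grip("tv_remote"))  # palm wrist pronated
```

In the real system the "recognized object" would come from the neural network's classification of the camera image; here it is just a string standing in for that output.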
Everything from seeing to recognizing to responding takes place within milliseconds, roughly ten times faster than the most advanced prosthetic limbs currently on the market.