Robots may be working amongst you in your office very soon. Maybe that’s why you have time to read this article right now: some automated process is already handling aspects of your job for you. Physical robots that assist with everyday tasks will also see the light of day; it’s not a matter of if, but when. The one thing these robots currently lack is hand dexterity. They cannot grip well, and it is hard for them to use their hands the way a human can. Some biomedical robots used for surgery can reach into the human body with unparalleled dexterity, but robots with hands actually modeled on the human hand are considerably clunkier.
Therefore, a team of computer science and engineering researchers at the University of Washington set out to build a robot hand that is dexterous and that can record and learn from its mistakes if it does happen to drop something.
Talking to Design Engineering, Vikash Kumar, a UW doctoral student in computer science and engineering, said: “Hand manipulation is one of the hardest problems that roboticists have to solve. A lot of robots today have pretty capable arms but the hand is as simple as a suction cup or maybe a claw or a gripper.”
The lab studying the human hand and how to translate it into a robotic model is the University of Washington’s Movement Control Laboratory. There, engineers developed several algorithms that allow a computer to plan complex hand movements and send them to the hand itself. The results are striking, as shown in the video below.
The team uses sensors and motion capture to extract data from each trial, which informs their future efforts to improve the dexterity of robot hands modeled after human hands. The five-fingered robot hand appears able to mimic how humans grip objects.
“Usually, people look at a motion and try to determine what exactly needs to happen — the pinky needs to move that way, so we’ll put some rules in and try it and if something doesn’t work, oh the middle finger moved too much and the pen tilted, so we’ll try another rule,” says lead author and lab director Emo Todorov. “It’s almost like making an animated film — it looks real but there was an army of animators tweaking it. What we are using is a universal approach that enables the robot to learn from its own movements and requires no tweaking from us.”
Kumar explains how the robot learns from its actions to ensure that it grips objects in the correct way. He says: “It’s like sitting through lessons, going home and doing your homework to understand things better and then coming back to school a little more intelligent the next day.” Between attempts, the computer analyzes data from the motion-capture cameras and refines its control algorithms, learning which movements to repeat and which to avoid.
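The trial-and-error idea Kumar describes can be sketched in a few lines of code. This is a deliberately toy illustration, not the team’s actual algorithm: the function names, the five joint angles, and the “ideal” grip used for scoring are all made up for the example. The point it shows is the loop — attempt a grip, score the result, keep only the changes that worked.

```python
import random

def simulate_grip(finger_angles):
    """Toy stand-in for executing a grip and scoring how secure it was.
    An arbitrary set of joint angles plays the role of the best grip."""
    ideal = [0.5, 0.6, 0.55, 0.6, 0.4]  # hypothetical angles that hold the object
    error = sum((a - b) ** 2 for a, b in zip(finger_angles, ideal))
    return -error  # higher score means a more secure grip

def learn_grip(trials=200, step=0.1, seed=0):
    """Each trial randomly perturbs the current grip; perturbations that
    score better are kept -- the 'homework' between attempts."""
    rng = random.Random(seed)
    angles = [0.0] * 5  # one joint angle per finger (greatly simplified)
    best = simulate_grip(angles)
    for _ in range(trials):
        candidate = [a + rng.uniform(-step, step) for a in angles]
        score = simulate_grip(candidate)
        if score > best:  # keep only movements that improved the grip
            angles, best = candidate, score
    return angles, best

angles, score = learn_grip()
```

Real systems replace the random search here with far more sophisticated machine-learning methods, but the shape of the loop — act, measure, refine — is the same one the researchers describe.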