By Kristina Grifantini
One of the main things preventing robots from lending a hand with everyday tasks is a simple lack of manual dexterity. New research from a team at Columbia University in New York could help robots--and robotic prosthetics--get a better grip on all kinds of objects.
Good grip: A new approach allows a complicated robotic hand to grab an object more easily. Credit: Matei Ciocarlie and Peter Allen, Dept. of Computer Science, Columbia University
Peter Allen, a professor at Columbia University and director of its Robotics Group, and his colleague Matei Ciocarlie developed a simpler way to control a dexterous robotic hand by drawing on research in biology. They realized that while a human hand has about 20 degrees of freedom (20 joints that can each bend), the joints cannot move completely independently; their movements are linked to one another by muscles and nerves.
Traditionally, the software used to control a complex robot hand has tried to account for all the degrees of freedom in the robotic hand's joints, but this is computationally cumbersome and slows the robot down. Instead, Allen and Ciocarlie decided to limit the movement of a robot hand in the same way a human hand is limited. By linking its joints in this way, they showed it is possible to control a complicated robotic hand with faster, more efficient algorithms and without losing any of its functionality. "You can learn from biology to reduce the degrees of freedom," says Allen. "Even though you may have 20 degrees of freedom, you don't need to use them."
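The general idea can be sketched in a few lines of code. The example below is a hypothetical illustration, not the Columbia group's software: a small number of control parameters, analogous to coupled muscle activations, are mapped through a fixed coupling matrix onto the full set of joint angles, so a grasp planner only has to search a two-dimensional space instead of a twenty-dimensional one. The matrix values and names are made up for the sketch.

```python
import numpy as np

# Hypothetical illustration of joint coupling (not the researchers' code):
# two control parameters drive all 20 joints through a fixed linear
# coupling matrix, so a planner searches 2 dimensions instead of 20.
NUM_JOINTS = 20
NUM_CONTROLS = 2

# Made-up coupling values; each column ties one control parameter to every joint.
rng = np.random.default_rng(seed=0)
coupling = rng.uniform(0.0, 1.0, size=(NUM_JOINTS, NUM_CONTROLS))

def joint_angles(controls: np.ndarray) -> np.ndarray:
    """Map a low-dimensional control vector to full joint angles (radians)."""
    return coupling @ controls

# The planner only has to choose these two numbers.
print(joint_angles(np.array([0.8, 0.3])))
```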
The researchers experimented with four different kinds of complex robotic hand, each of which had multiple joints. They developed software to control each gripper by linking its joints. In simulations and real-life tests, the software was able to quickly calculate grasping positions in order to grab different objects, including a wine glass, flask, telephone, model airplane, and ashtray.
The system works in two stages. First, it generates an array of possible grasping positions based on the angle at which the hand is approaching the object. Second, it selects from these positions the one that will provide the most stable grasp. Then, if the operator thinks the grasping position looks right, he or she can give the command and the hand will take hold of the object.
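A rough sketch of that two-stage loop is shown below, using hypothetical helper functions for sampling candidate grasps and scoring their stability; neither is the researchers' published code, and the toy stand-ins exist only to make the example run.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Grasp:
    controls: tuple[float, ...]  # low-dimensional hand-posture parameters
    wrist_angle: float           # orientation relative to the approach direction

def best_grasp(
    approach_angle: float,
    sample_candidates: Callable[[float], Sequence[Grasp]],
    stability_score: Callable[[Grasp], float],
) -> Grasp:
    """Stage 1: enumerate grasps reachable from this approach direction.
    Stage 2: return the candidate with the highest stability score."""
    candidates = sample_candidates(approach_angle)
    return max(candidates, key=stability_score)

# Toy stand-ins for the two stages, purely for illustration.
def toy_samples(angle: float) -> list[Grasp]:
    return [Grasp(controls=(0.2 * i, 0.5), wrist_angle=angle) for i in range(5)]

def toy_score(g: Grasp) -> float:
    return -abs(g.controls[0] - 0.5)  # pretend a mid-range closure is most stable

print(best_grasp(0.0, toy_samples, toy_score))
```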
"Grasping objects with a human-like hand is a seemingly complex computational problem," says Charlie Kemp, a professor at the Georgia Institute of Technology, who has developed robots capable of grasping unfamiliar objects. "This work suggests that there is an underlying simplicity. It shows that a complex hand may not require a complex brain."
Calculated grabs: A sensor lets the system detect the direction of approach; the software then calculates the most effective grasping positions. Credit: Matei Ciocarlie and Peter Allen, Dept. of Computer Science, Columbia University
"I believe it's the way forward for automated grasping," adds Eric Berger, the codirector of the personal robotics program at Willow Garage, a robotics research center in California. "From my perspective, the algorithmic work ... is novel and useful, but the most exciting thing about what they're doing is the different approaches they're taking to applying these new algorithms to the real world."
In their experiments, the Columbia team preprogrammed the system with a rough idea of the shape of the object it would grab. The next step is to couple the robotic grasper to a system that can evaluate completely unfamiliar objects in the real world.
Other research groups are making progress in this area. For example, Intel has created technology that uses electric fields to carefully sense delicate objects within reach, while Andrew Ng and colleagues at Stanford University have developed a robot that can calculate the best place to grab onto an object that it hasn't seen before.
http://www.technologyreview.com/computing/23023/