The robot can work around the clock, frequently with a different object in each of its grippers. Tellex and her graduate student John Oberlin have gathered—and are now sharing—data on roughly 200 items, starting with such things as a child’s shoe, a plastic boat, a rubber duck, a garlic press and other cookware, and a sippy cup that originally belonged to her three-year-old son. Other scientists can contribute their robots’ own data, and Tellex hopes that together they will build up a library of information on how robots should handle a million different items. Eventually, robots confronting a crowded shelf will be able to “identify the pen in front of them and pick it up,” Tellex says.
Projects like this are possible because many research robots use the same standard programming framework, the Robot Operating System, known as ROS. Once one machine learns a given task, it can pass the data on to others—and those machines can upload feedback that will in turn refine the instructions given to subsequent machines. Tellex says the data about how to recognize and grasp any given object can be compressed to just five to 10 megabytes, about the size of a song in your music library.
Tellex was an early partner in a project called RoboBrain, which demonstrated how one robot could learn from another’s experience. Her collaborator Ashutosh Saxena, then at Cornell, taught his PR2 robot to lift small cups and position them on a table. Then, at Brown, Tellex downloaded that information from the cloud and used it to train her Baxter, which is physically different, to perform the same task in a different environment.
Such progress might seem incremental now, but in the next five to 10 years, we can expect to see “an explosion in the ability of robots,” says Saxena, now CEO of a startup called Brain of Things. As more researchers contribute to and refine cloud-based knowledge, he says, “robots should have access to all the information they need, at their fingertips.”