Robots can learn to recognize objects and patterns fairly well, but interpreting visual input and acting on it is far more difficult. Researchers at the University of Maryland, funded by DARPA’s Mathematics of Sensing, Exploitation and Execution (MSEE) program, recently developed a system that enabled robots to process visual data from a series of “how to” cooking videos on YouTube. Based on what was shown in a video, the robots were able to recognize, grab and manipulate the correct kitchen utensil or object and perform the demonstrated task with high accuracy, without additional human input or programming.
Advances in artificial intelligence (AI) are making virtual and robotic assistants increasingly capable of performing complex tasks. For these “smart” machines to be considered safe and trustworthy collaborators with human partners, however, they must be able to quickly assess a given situation and apply human social norms. Such norms are intuitively obvious to most people, largely the result of growing up in a society that provides subtle or not-so-subtle cues from childhood about how to behave appropriately in a group setting or respond to interpersonal situations. But teaching those rules to robots is a novel challenge.