Last week, we (and most of the rest of the internet) covered some research from MIT that uses a brain interface to help robots correct themselves when they’re about to make a mistake. This is very cool, very futuristic stuff, but it only works if you wear a very, very silly hat that can classify your brain waves in 10 milliseconds flat.
At Brown University, researchers in Stefanie Tellex’s lab are working on a more social approach to helping robots more accurately interact with humans. By enabling a robot to model its own confusion in an interactive object-fetching task, the robot can ask relevant clarifying questions when necessary to help understand exactly what humans want. No hats required.
Whether you ask a human or a robot to fetch you an object, the task is simple if the object is unique in some way, and more complicated if it involves several similar objects. Say you're a mechanic, and you want an assistant to bring you a tool.