Robotics: Science and Systems XII

Situated Language Understanding with Human-like and Visualization-Based Transparency

Leah Perlmutter, Eric Kernfeld, Maya Cakmak

Abstract:

Communication with robots is challenging, partly due to their differences from humans and the consequent discrepancy in people’s mental models of what robots can see, hear, or understand. Transparency mechanisms aim to mitigate this challenge by providing users with information about the robot’s internal processes. While most research in human-robot interaction aims toward natural transparency using human-like verbal and non-verbal behaviors, our work advocates for visualization-based transparency. In this paper, we first present an end-to-end system that infers task commands referring to objects or surfaces in everyday human environments, using Bayesian inference to combine scene understanding, pointing detection, and speech recognition. We characterize the capabilities of this system through systematic tests with a corpus collected from people (N=5). We then design human-like and visualization-based transparency mechanisms and evaluate them in a user study (N=20). The study demonstrates the effects of visualizations on the accuracy of people’s mental models, as well as their effectiveness and efficiency in communicating task commands.
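
To make the fusion step concrete, the sketch below illustrates how a posterior over candidate referents might be formed by Bayesian combination of per-modality evidence (speech and pointing), assuming conditional independence of the observations given the referent. This is an illustrative approximation only; the object names, scores, and function names are hypothetical and are not taken from the paper's system.

# Illustrative sketch: Bayesian combination of speech and pointing evidence
# to resolve which object a command refers to. All values are hypothetical.

def normalize(scores):
    """Scale a dict of nonnegative scores so they sum to 1."""
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def fuse_referent_posterior(prior, speech_likelihood, pointing_likelihood):
    """Posterior over candidate referents, assuming speech and pointing are
    conditionally independent given the referent:
    P(obj | speech, point) ~ P(obj) * P(speech | obj) * P(point | obj)."""
    posterior = {
        obj: prior[obj] * speech_likelihood[obj] * pointing_likelihood[obj]
        for obj in prior
    }
    return normalize(posterior)

if __name__ == "__main__":
    # Hypothetical scene with three detected objects.
    objects = ["red_mug", "blue_mug", "notebook"]
    prior = {obj: 1.0 / len(objects) for obj in objects}           # uniform prior
    speech = {"red_mug": 0.6, "blue_mug": 0.5, "notebook": 0.1}    # e.g., "the mug"
    pointing = {"red_mug": 0.7, "blue_mug": 0.2, "notebook": 0.1}  # gesture near red mug
    print(fuse_referent_posterior(prior, speech, pointing))
    # -> red_mug receives the highest posterior probability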

Bibtex:

  
@INPROCEEDINGS{Perlmutter-RSS-16, 
    AUTHOR    = {Leah Perlmutter AND Eric Kernfeld AND Maya Cakmak}, 
    TITLE     = {Situated Language Understanding with Human-like and Visualization-Based Transparency}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2016}, 
    ADDRESS   = {Ann Arbor, Michigan}, 
    MONTH     = {June}, 
    DOI       = {10.15607/RSS.2016.XII.040} 
}