Robotics: Science and Systems X

Hierarchical Semantic Labeling for Task-Relevant RGB-D Perception

Chenxia Wu, Ian Lenz, Ashutosh Saxena

Abstract:

Semantic labeling of RGB-D scenes is crucial for enabling robots to perform mobile manipulation tasks, but different tasks may require entirely different sets of labels. For example, when navigating to an object, we may need only a single label denoting its class, but to manipulate it, we might need to identify individual parts. In this work, we present an algorithm that produces hierarchical labelings of a scene, following is-part-of and is-type-of relationships. Our model is based on a Conditional Random Field that relates pixel-wise and pair-wise observations to labels. We encode hierarchical labeling constraints into the model while keeping inference tractable. Our model thus predicts different specificities in labeling based on its confidence: if it is not sure whether an object is Pepsi or Sprite, it will predict soda rather than making an arbitrary choice. In extensive experiments, both offline on standard datasets and online on a robot, we show that our model outperforms other state-of-the-art methods in labeling performance as well as in success rate for robotic tasks.
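As a rough illustration of the confidence-based back-off described in the abstract, the following Python sketch shows how a prediction can be coarsened along is-type-of relations when fine-grained labels are uncertain. This is not the authors' CRF inference; the class names, hierarchy, and threshold are hypothetical.

```python
# Minimal sketch of coarsening a label along an is-type-of hierarchy
# when fine-grained predictions are uncertain (hypothetical example,
# not the paper's CRF inference).

# Hypothetical is-type-of hierarchy: child -> parent
PARENT = {"pepsi": "soda", "sprite": "soda", "soda": "object"}


def back_off(label_probs, threshold=0.6):
    """Return the most specific label whose probability exceeds
    `threshold`, walking up the hierarchy otherwise."""
    # Start from the most probable fine-grained label.
    label, prob = max(label_probs.items(), key=lambda kv: kv[1])
    while prob < threshold and label in PARENT:
        parent = PARENT[label]
        # A parent label's probability accumulates its children's,
        # e.g. P(soda) = P(pepsi) + P(sprite).
        prob = sum(p for l, p in label_probs.items()
                   if PARENT.get(l) == parent) + label_probs.get(parent, 0.0)
        label = parent
    return label, prob


if __name__ == "__main__":
    # The segment is probably a soda, but Pepsi vs. Sprite is uncertain,
    # so the coarser label "soda" is returned instead of an arbitrary guess.
    probs = {"pepsi": 0.45, "sprite": 0.40}
    print(back_off(probs))  # -> ('soda', 0.85)
```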

Download:

Bibtex:

  
@INPROCEEDINGS{Wu-RSS-14, 
    AUTHOR    = {Chenxia Wu AND Ian Lenz AND Ashutosh Saxena}, 
    TITLE     = {Hierarchical Semantic Labeling for Task-Relevant RGB-D Perception}, 
    BOOKTITLE = {Proceedings of Robotics: Science and Systems}, 
    YEAR      = {2014}, 
    ADDRESS   = {Berkeley, USA}, 
    MONTH     = {July},
    DOI       = {10.15607/RSS.2014.X.006} 
}