Annotating RGBD Images of Indoor Scenes

Yu-Shiang Wong
National Tsing Hua University
Hung-Kuo Chu
National Tsing Hua University

Abstract

Annotating RGBD images with high-quality semantic annotations plays a crucial role in advanced scene understanding and image manipulation. While the popularity of affordable RGBD sensors has eased the acquisition of RGBD images, annotating them, either automatically or manually, remains a challenging task. State-of-the-art annotation tools focus only on 2D operations and provide at most image segmentation and object labels, even in the presence of depth data. In this work, we present an interactive system that exploits both color and depth cues to facilitate annotating RGBD images with image- and scene-level segmentation, object labels, and 3D geometry and structure. With our system, users only have to provide a few scribbles to identify object instances and to specify the labels and support relationships of objects, while the system performs the tedious tasks of segmenting the image and estimating 3D cuboids. We test the system on a subset of a benchmark RGBD dataset and demonstrate that it provides a convenient way to generate a baseline dataset with rich semantic annotations.
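The abstract describes scribble-seeded segmentation that combines color and depth cues. The paper does not publish its algorithm here, so the following is only an illustrative sketch: a simple region-growing flood fill that accepts a neighboring pixel when both its color and its depth are close to the scribbled seed. All thresholds and the synthetic test scene are assumptions for demonstration.

```python
import numpy as np
from collections import deque

def grow_region(rgb, depth, seed, color_tol=30.0, depth_tol=0.1):
    """Flood-fill from a scribble seed pixel, accepting 4-connected
    neighbors whose color AND depth are close to the seed's values.
    This is an illustrative stand-in for the paper's segmentation."""
    h, w = depth.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_color = rgb[seed].astype(float)
    seed_depth = depth[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                color_ok = np.linalg.norm(
                    rgb[ny, nx].astype(float) - seed_color) < color_tol
                depth_ok = abs(depth[ny, nx] - seed_depth) < depth_tol
                if color_ok and depth_ok:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Synthetic 8x8 scene: a 4x4 "table" patch at depth 1.0 m on a dark
# "floor" at depth 2.0 m; a single scribble seed lands on the table.
rgb = np.zeros((8, 8, 3), dtype=np.uint8)
depth = np.full((8, 8), 2.0)
rgb[2:6, 2:6] = (200, 150, 100)
depth[2:6, 2:6] = 1.0

mask = grow_region(rgb, depth, seed=(3, 3))
print(mask.sum())  # 16 pixels: exactly the 4x4 table patch
```

The depth test is what keeps the fill from leaking into same-colored regions at different distances, which is the benefit of having RGBD rather than RGB input.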



Results


Five examples of annotated indoor scenes. The left column shows the input RGBD images. The middle column presents the annotated labels and 3D cuboids of objects. The support hierarchy among objects is shown in the right column, where yellow, purple, and light blue arrows indicate support by the floor, walls, and other objects, respectively.
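The figure's right column encodes each object's supporter (floor, wall, or another object). A minimal sketch of how such annotations could be recorded and grouped into a support hierarchy is shown below; the record fields, object names, and coordinates are hypothetical, not the paper's actual data format.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedObject:
    """Hypothetical record of one annotated object: a label, a 3D
    cuboid (center and size), and the entity that supports it."""
    label: str
    cuboid_center: tuple          # (x, y, z), illustrative units
    cuboid_size: tuple            # (width, height, depth)
    supported_by: str = "floor"   # "floor", "wall", or another label

scene = [
    AnnotatedObject("table", (0.0, 0.4, 1.5), (1.2, 0.8, 0.6)),
    AnnotatedObject("lamp", (0.2, 0.9, 1.5), (0.2, 0.4, 0.2),
                    supported_by="table"),
    AnnotatedObject("picture", (0.0, 1.5, 2.9), (0.5, 0.4, 0.05),
                    supported_by="wall"),
]

# Group objects by their supporter, mirroring the arrows in the figure.
hierarchy = {}
for obj in scene:
    hierarchy.setdefault(obj.supported_by, []).append(obj.label)
print(hierarchy)  # {'floor': ['table'], 'table': ['lamp'], 'wall': ['picture']}
```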

Acknowledgement

We are grateful to the anonymous reviewers for their comments and suggestions. This project was supported in part by the National Science Council of Taiwan (NSC-102-2221-E-007-055-MY3).

Bibtex

@inproceedings{Wong:Anno:ISD14,
  author={Wong, Yu-Shiang and Chu, Hung-Kuo},
  title={Annotating RGBD Images of Indoor Scenes},
  booktitle = {SIGGRAPH Asia 2014 Workshop on Indoor Scene Understanding: Where Graphics meets Vision},
  year={2014}
}

Links