Scale-aware Black-and-White Abstraction of 3D Shapes

ACM Transactions on Graphics (Proc. of SIGGRAPH 2018)

Abstract

Flat design is a modern style of graphic design that minimizes the number of design attributes required to convey 3D shapes. This approach suits design contexts requiring simplicity and efficiency, such as mobile computing devices. This 'less-is-more' design philosophy poses significant challenges in practice, since it must represent complex shapes with a restricted range of design elements (e.g., color and resolution). In this work, we investigate a means of computationally generating a specialized 2D flat representation, an image formed by black-and-white patches, from 3D shapes. We present a novel framework that automatically abstracts 3D man-made shapes into 2D binary images at multiple scales. Based on a set of identified design principles related to the inference of geometry and structure, our framework jointly analyzes the input 3D shape and its counterpart 2D representation, and then executes a carefully devised layout optimization algorithm. The robustness and effectiveness of our method are demonstrated by testing it on a wide variety of man-made shapes and comparing the results with baseline methods via a pilot user study. We further present two practical applications that are likely to benefit from our work.


Algorithm



Overview of the proposed framework. Given an input 3D shape (a), our framework first performs joint 2D/3D shape analysis to encode geometric and structural properties into a shape graph and a patch graph (b). An abstraction layout optimization is then conducted according to the design principles (c) to generate the 2D black-and-white abstraction (d). For clarity, we illustrate only a subset of the graph nodes in the shape graph and patch graph, and the edge directions in the patch graph are not shown.
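The two-stage pipeline above (graph construction, then layout optimization) can be sketched roughly as follows. This is a hypothetical illustration only: the data structures (`Patch`), the occlusion-edge encoding, and the greedy legibility filter standing in for the layout optimization are all our own assumptions, not the paper's actual formulation.

```python
# Illustrative sketch of the framework's two stages; all names and the
# simple area-based scoring are assumptions, not the authors' method.
from dataclasses import dataclass

@dataclass
class Patch:
    part_id: int   # the 3D part this 2D patch is projected from
    area: float    # normalized screen-space area of the patch
    color: int = 1 # 1 = black patch, 0 = white/background

def build_patch_graph(patches, occludes):
    """Encode directed occlusion relations between projected patches
    (a stand-in for the paper's patch graph)."""
    graph = {p.part_id: [] for p in patches}
    for front, back in occludes:  # 'front' patch occludes 'back' patch
        graph[front].append(back)
    return graph

def abstraction_layout(patches, scale, min_area=0.01):
    """Greedy stand-in for the abstraction layout optimization:
    keep only patches that remain legible at the target scale."""
    return [p for p in patches if p.area * scale >= min_area]
```

A usage example under these assumptions: at a small target scale, patches whose projected area falls below the legibility threshold are dropped, which mimics the scale-aware behavior described above in a crude way.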

Results



Acknowledgement

We are grateful to the anonymous reviewers for their comments and suggestions. We also thank all the anonymous users for participating in the user study. The work was supported in part by the Ministry of Science and Technology of Taiwan (106-3114-E-007-008 and 105-2221-E-007-104-MY2), and CAMERA, the RCUK Centre for the Analysis of Motion, Entertainment Research and Applications, EP/M023281/1.

Links