Synthesizing Emerging Images from Photographs

Cheng-Han Yang
Ying-Miao Kuo

Abstract

Emergence is the visual phenomenon by which humans recognize objects in a seemingly noisy image by aggregating information from meaningless pieces and perceiving a meaningful whole. Such a unique mental skill makes emergence an effective scheme for telling humans and machines apart. Images that are recognizable by humans but difficult for an automatic algorithm to detect are also referred to as emerging images. A recent state-of-the-art work proposes to synthesize emerging images of 3D objects; its results have been verified to be easy for humans to recognize while posing a hard time for automatic machines. However, using 3D objects as inputs prevents that system from being practical and scalable for generating an unlimited number of high-quality images: the quality of results is sensitive to the viewing and lighting conditions in the 3D domain, and the available resources of 3D models are usually limited, which restricts scalability. This paper presents a novel synthesis technique to automatically generate emerging images from regular photographs, which are commonly taken with decent settings and widely accessible online. We adapt the previous system to the 2D setting of input photographs and develop a set of image-based operations. Our algorithm is also designed to support control over the difficulty level of the resulting images through a limited set of parameters. We conducted several experiments to validate the efficacy and efficiency of our system.
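To give a flavor of the kind of image-based degradation involved, the sketch below is a minimal, hypothetical illustration (not the paper's actual algorithm): it breaks an object silhouette into sparse splats and scatters similar clutter splats over the background, so that local fragments carry little meaning while the whole silhouette remains perceivable. The function name `emerge` and the `keep_prob`/`clutter_prob` parameters are illustrative stand-ins for the kind of difficulty-controlling parameters the abstract mentions.

```python
import numpy as np

def emerge(mask, keep_prob=0.08, clutter_prob=0.02, splat=1, seed=0):
    """Degrade a binary object mask into an emergence-style speckle image.

    Illustrative sketch only: a random fraction of foreground pixels is kept
    as small square splats, and similar clutter splats are scattered over the
    background. Lowering keep_prob or raising clutter_prob makes the object
    harder to perceive, mimicking parameter-driven difficulty control.
    """
    rng = np.random.default_rng(seed)
    h, w = mask.shape
    out = np.zeros((h, w), dtype=np.uint8)
    # Sample splat centers: denser on the object, sparser clutter elsewhere.
    fg = (rng.random((h, w)) < keep_prob) & mask
    bg = (rng.random((h, w)) < clutter_prob) & ~mask
    for y, x in zip(*np.nonzero(fg | bg)):
        # Stamp a (2*splat+1)-sized square splat, clipped to the image.
        y0, y1 = max(0, y - splat), min(h, y + splat + 1)
        x0, x1 = max(0, x - splat), min(w, x + splat + 1)
        out[y0:y1, x0:x1] = 255
    return out

# Toy "photograph": a filled disk standing in for an object silhouette.
yy, xx = np.mgrid[0:128, 0:128]
disk = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
img = emerge(disk)
```

The splats are denser inside the silhouette than outside, so a human viewer can still aggregate them into the disk shape, while no individual splat reveals it.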


Results



Links