Camouflage Images
Hung-Kuo Chu, National Cheng-Kung University
Wei-Hsin Hsu, National Cheng-Kung University
Niloy J. Mitra, IIT Delhi
Daniel Cohen-Or, Tel Aviv University
Tien-Tsin Wong, The Chinese University of Hong Kong
Tong-Yee Lee, National Cheng-Kung University
Abstract
Camouflage images contain one or more hidden figures that remain imperceptible or go unnoticed for a while. One possible explanation attributes this ability to delay perception of the hidden figures to the theory that human perception works in two main phases: feature search and conjunction search. Effective camouflage images make feature-based recognition difficult, forcing the recognition process to employ conjunction search, which takes considerable effort and time. In this paper, we present a technique for creating camouflage images. To foil the feature search, we remove the original subtle texture details of the hidden figures and replace them with those of the surrounding apparent image. To leave an appropriate degree of clues for the conjunction search, we compute and assign new tones to regions in the embedded figures by optimizing between two conflicting terms, which we call immersion and standout, corresponding to hiding and leaving clues, respectively. We show a large number of camouflage images generated by our technique, with and without user guidance. We have tested the quality of the images in an extensive user study, which demonstrates good control over the difficulty levels.
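The abstract outlines a two-part pipeline: texture inside the hidden figure is resynthesized from the surrounding apparent image to defeat feature search, and new region tones are then chosen by balancing an immersion term against a standout term. The abstract does not state the actual energy formulation, so the Python sketch below is only a minimal illustration of such a trade-off; the quadratic immersion term, the margin-based standout term, and the parameters `lam` and `margin` are assumptions for demonstration, not the paper's objective.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of the immersion/standout trade-off described in the
# abstract. The energy forms, `lam`, and `margin` below are assumptions
# chosen for demonstration; the paper's actual formulation differs.

def camouflage_tones(bg_lum, adjacency, lam=0.7, margin=0.15):
    """Assign one luminance value per hidden-figure region.

    bg_lum    -- mean luminance of the surrounding image near each region
    adjacency -- list of (i, j) index pairs of neighboring regions
    lam       -- balance between immersion (hide) and standout (leave clues)
    margin    -- minimum luminance contrast to keep across region boundaries
    """
    bg_lum = np.asarray(bg_lum, dtype=float)

    def energy(t):
        # Immersion: pull each region's tone toward the local background
        # luminance, making feature-based detection of the figure harder.
        e_immerse = np.sum((t - bg_lum) ** 2)
        # Standout: penalize adjacent regions whose contrast falls below
        # `margin`, so enough structure survives for conjunction search.
        e_standout = sum(max(0.0, margin - abs(t[i] - t[j])) ** 2
                         for i, j in adjacency)
        return lam * e_immerse + (1.0 - lam) * e_standout

    res = minimize(energy, x0=bg_lum.copy(), method="L-BFGS-B",
                   bounds=[(0.0, 1.0)] * len(bg_lum))
    return res.x

# Example: three regions of a hidden figure over a mid-gray background.
if __name__ == "__main__":
    tones = camouflage_tones(bg_lum=[0.5, 0.5, 0.5],
                             adjacency=[(0, 1), (1, 2)])
    print(tones)  # tones stay near 0.5 but retain some pairwise contrast
```

In this toy setup, raising `lam` immerses the figure more deeply (harder to spot), while lowering it preserves more inter-region contrast, leaving stronger clues for the conjunction search; this mirrors the difficulty control evaluated in the user study.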
Results
Video
Acknowledgement
This work is supported in part by the Landmark Program of the NCKU Top University Project (contract B0008), the National Science Council (contracts NSC-97-2628-E-006-125-MY3 and NSC-96-2628-E-006-200-MY3) Taiwan, the Israel Science Foundation, and the Research Grants Council of the Hong Kong SAR under the General Research Fund (CUHK417107). Niloy was partially supported by a Microsoft Outstanding Young Faculty Fellowship. We thank Chung-Ren Yan for his valuable comments on texture synthesis, Kun-Chuan Feng for helping to design the user study, and Jonathan Balzer for the video voice-over. We sincerely thank all the participants of the user study for their time and useful feedback. We thank Steven Michael Gardner and John Van Straalen for granting permission to use their artworks, and Aleta A. Rodriguez, Krum Sergeev, Alexander Stross, and Joel Antunes for the photographs used in our examples. Finally, we are grateful to the anonymous reviewers for their comments and suggestions.