Camouflage Images

ACM Transactions on Graphics (Proc. of SIGGRAPH 2010)


Camouflage images contain one or more hidden figures that remain imperceptible or unnoticed for a while. One possible explanation attributes this delayed perception to the theory that human perception works in two main phases: feature search and conjunction search. Effective camouflage images make feature-based recognition difficult, and thus force the recognition process to employ conjunction search, which takes considerable effort and time. In this paper, we present a technique for creating camouflage images. To foil the feature search, we remove the original subtle texture details of the hidden figures and replace them with those of the surrounding apparent image. To leave an appropriate degree of clues for the conjunction search, we compute and assign new tones to regions in the embedded figures by performing an optimization between two conflicting terms, which we call immersion and standout, corresponding to hiding and leaving clues, respectively. We show a large number of camouflage images generated by our technique, with or without user guidance. We have tested the quality of the images in an extensive user study, which demonstrates good control over the difficulty levels.
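The immersion/standout trade-off described above can be illustrated with a toy optimization. In this sketch, each region of the hidden figure gets a new tone that is pulled toward the local background luminance (immersion) while the tone differences between neighboring figure regions are partially preserved (standout). The energy, the weights, and all function names here are illustrative assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def assign_tones(background_lum, figure_lum, neighbors,
                 w_immersion=1.0, w_standout=0.5, iters=500, lr=0.05):
    """Toy gradient descent on a two-term energy (hypothetical form):

        E = w_immersion * sum_i (t_i - b_i)^2
          + w_standout  * sum_{(i,j)} ((t_i - t_j) - (f_i - f_j))^2

    where b_i is the background luminance under region i and f_i is the
    original luminance of the hidden figure in region i.
    """
    b = np.asarray(background_lum, dtype=float)
    f = np.asarray(figure_lum, dtype=float)
    t = b.copy()  # start fully immersed in the background
    for _ in range(iters):
        # Immersion term pulls each tone toward the background.
        grad = 2.0 * w_immersion * (t - b)
        # Standout term preserves the figure's internal tone contrast.
        for i, j in neighbors:
            d = (t[i] - t[j]) - (f[i] - f[j])
            grad[i] += 2.0 * w_standout * d
            grad[j] -= 2.0 * w_standout * d
        t -= lr * grad
    return t

# Two-region example: a bright background (0.8, 0.7) and a hidden figure
# with strong internal contrast (0.9 vs 0.2). The optimized tones stay
# near the background but retain part of the contrast as a clue.
tones = assign_tones([0.8, 0.7], [0.9, 0.2], neighbors=[(0, 1)])
```

Raising `w_standout` keeps more of the figure's original contrast (easier to spot); raising `w_immersion` flattens the figure into the background (harder), which mirrors the difficulty control evaluated in the user study.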


(Top) Two camouflage images produced by our technique. The left and right images have seven and four camouflaged objects, respectively, at various levels of difficulty. By removing distinguishable elements from the camouflaged objects we make feature search difficult, forcing the viewers to use conjunction search, a serial and delayed procedure. (Please zoom in for a better effect.)
(Left) Artist-created camouflage images: (left) 8 eagles and (right) 13 wolves are embedded. (Copyright of Steven Gardner)


(Below) Results of camouflaging a lion onto a mountain backdrop using various methods: (left to right) alpha blending, Poisson cloning, texture transfer, Poisson cloning followed by texture transfer, and our method.
(Below) Three camouflage images created by our algorithm.

(Top) Recognition time and success rates on three difficulty levels of generated camouflage images as observed in the course of our user study (see Section 6).
(Top) Comparison of synthesized results (right) with artist-generated ones (left).



This work is supported in part by the Landmark Program of the NCKU Top University Project (contract B0008), the National Science Council (contracts NSC-97-2628-E-006-125-MY3 and NSC-96-2628-E-006-200-MY3) Taiwan, the Israel Science Foundation, and the Research Grants Council of the Hong Kong SAR under General Research Fund (CUHK417107). Niloy was partially supported by a Microsoft Outstanding Young Faculty Fellowship. We thank Chung-Ren Yan for his valuable comments on texture synthesis, Kun-Chuan Feng for helping to design the user study, and Jonathan Balzer for the video voice-over. We sincerely thank all the participants of the user study for their time and useful feedback. We thank Steven Michael Gardner and John Van Straalen for granting permission to use their artworks, and Aleta A. Rodriguez, Krum Sergeev, Alexander Stross, and Joel Antunes for the photographs used in our examples. Finally, we are grateful to the anonymous reviewers for their comments and suggestions.


@article{chu2010camouflage,
 author = {Chu, Hung-Kuo and Hsu, Wei-Hsin and Mitra, Niloy J. and Cohen-Or, Daniel and Wong, Tien-Tsin and Lee, Tong-Yee},
 title = {Camouflage Images},
 journal = {ACM Trans. Graph. (Proc. SIGGRAPH)},
 volume = {29},
 number = {4},
 year = {2010},
 pages = {51:1--51:8},
 articleno = {51},
 numpages = {8}
}