Honors & Awards

Members

Hsiao, Jen Chen (Master)
Chen, Yung-An (Master)
Fu, Ching-Hua (Master)
Kuo, Jessie (Master)
Hsiao, Kai-Wen (Master)
Ou Yang, Fei-Hong (Master)
Lo, Chin-Chieh (Master)
Lin, Yu-Jui (Master)
Lee, Ruen-Rone (Alumni Advisor)
Kang, Ming Hsi (Master)
Chang, Hung-Jui (Master)
Lin, Wen-Sheng (Master)
Chang, Shi-Xiu (Master)
Chang, Chia-Sheng (PhD)
Yang, Sun-Da (Master)
Chen, Kuan-Ting (Master)
Lin, You-En (Master)
Joseph, Tien (Master)

2017

User-Guided Line Abstraction Using Coherence and Structure Analysis
Hui-Chi Tsai, Ya-Hsuan Lee, Ruen-Rone Lee, Hung-Kuo Chu

Computational Visual Media 2017, Computational Visual Media

Abstract

Line drawing is a style of image abstraction in which the perception of an image is conveyed using distinct straight or curved lines. However, extracting semantically salient lines is not trivial and is mastered only by skilled artists. While many parametric filters can successfully extract accurate and coherent lines, their results are sensitive to parameter tuning and easily lead to either an excessive or an insufficient number of lines. In this work, we present an interactive system that generates concise line abstractions of arbitrary images from a few user-specified strokes. Specifically, the user simply provides a few intuitive strokes on the input image through a sketching interface, tracing roughly along edges or scribbling on regions of interest. The system then automatically extracts lines that are long, coherent, and share similar textural structures from a corresponding highly detailed line drawing. We have tested our system on a wide variety of images. The experimental results show that our system outperforms state-of-the-art techniques in terms of quality and efficiency.
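As a rough illustration of the stroke-guided selection step, one can imagine filtering the polylines of a highly detailed line drawing by their proximity to a user stroke. The sketch below is illustrative only: all names and thresholds are assumptions, and it ignores the paper's coherence and texture-structure analysis.

```python
def select_lines(lines, stroke, radius=10.0, min_len=5):
    """Keep detailed-drawing polylines that pass near a user stroke:
    a crude stand-in for proximity-based line selection. `lines` is a
    list of polylines (lists of (x, y) points); `stroke` is the user
    stroke as a list of (x, y) points."""
    def near(p, q):
        # squared-distance test against the stroke radius
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius * radius
    return [ln for ln in lines
            if len(ln) >= min_len and any(near(p, q) for p in ln for q in stroke)]
```

A real system would additionally score candidate lines by length, curvature coherence, and textural similarity rather than by proximity alone.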


AlphaRead: Support Unambiguous Referencing in Remote Collaboration with Readable Object Annotation
Yuan-Chia Chang, Hao-Chuan Wang, Hung-Kuo Chu, Shung-Ying Lin, Shuo-Ping Wang


[Paper]

2016

Interactive Videos: Plausible Video Editing using Sparse Structure Points
Chia-Sheng Chang, Hung-Kuo Chu, Niloy J. Mitra

Eurographics 2016, Computer Graphics Forum

Abstract

Video remains the method of choice for capturing temporal events. However, without access to the underlying 3D scene models, it remains difficult to make object-level edits in a single video or across multiple videos. While it may be possible to explicitly reconstruct the 3D geometries to facilitate these edits, such a workflow is cumbersome, expensive, and tedious. In this work, we present a much simpler workflow to create plausible editing and mixing of raw video footage using only sparse structure points (SSP) directly recovered from the raw sequences. First, we utilize user scribbles to structure the point representations obtained using structure-from-motion on the input videos. The resultant structure points, even when noisy and sparse, are then used to enable various video edits in 3D, including view perturbation, keyframe animation, object duplication and transfer across videos, etc. Specifically, we describe how to synthesize object images from new views adopting a novel image-based rendering technique using the SSPs as a proxy for the missing 3D scene information. We propose a structure-preserving image warping on multiple input frames adaptively selected from the object's video, followed by a spatio-temporally coherent image stitching to compose the final object image. Simple planar shadows and depth maps are synthesized for objects to generate plausible video sequences mimicking real-world interactions. We demonstrate our system on a variety of input videos to produce complex edits, which are otherwise difficult to achieve.

Abstract

Ambiguous figure-ground images, mostly represented as binary images, are fascinating as they present viewers with the visual phenomenon of perceiving multiple interpretations from a single image. In one possible interpretation, the white region is seen as a foreground figure while the black region is treated as shapeless background. Such perception can reverse instantly at any moment. In this paper, we investigate the theory behind this ambiguous perception and present an automatic algorithm to generate such images. We model the problem as a binary image composition using two object contours and approach it through a three-stage pipeline. The algorithm first performs partial shape matching to find a good partial contour match between objects. This matching is based on a content-aware shape matching metric, which captures features of ambiguous figure-ground images. We then combine matched contours into a compound contour using an adaptive contour deformation, followed by computing an optimal cropping window and image binarization for the compound contour that maximize the completeness of object contours in the final composition. We have tested our system using a wide range of input objects and generated a large number of convincing examples with or without user guidance. The efficiency of our system and the quality of results are verified through an extensive experimental study.

A Simulation on Grass Swaying with Dynamic Wind Force
Yi Lo, Ruen-Rone Lee, Hung-Kuo Chu, Chun-Fa Chang


[Paper] [Video]
Synthesizing Emerging Images from Photographs
Cheng-Han Yang, Ying-Miao Kuo, Hung-Kuo Chu


Abstract

Emergence is the visual phenomenon by which humans recognize objects in a seemingly noisy image by aggregating information from meaningless pieces and perceiving a whole that is meaningful. Such a unique mental skill makes emergence an effective scheme for telling humans and machines apart. Images that are detectable by humans but difficult for an automatic algorithm to recognize are also referred to as emerging images. A recent state-of-the-art work proposes to synthesize such images from 3D objects. Their results were further verified to be easy for humans to recognize while posing difficulty for automatic machines. However, using 3D objects as inputs brings drawbacks: the quality of the results is sensitive to the viewing and lighting conditions in the 3D domain, and the available resources of 3D models are usually limited, which restricts scalability. This paper presents a novel synthesis technique that automatically generates emerging images from regular photographs, which are commonly taken with decent settings and widely accessible online. We adapt the previous system to the 2D setting of input photographs and develop a set of image-based operations. Our algorithm is also designed to support control over the difficulty level of the resultant images through a limited set of parameters. We conducted several experiments to validate the efficacy and efficiency of our system.

Feature-Aware Pixel Art Animation
Ming-Hsun Kuo, Yongliang Yang, Hung-Kuo Chu

Pacific Graphics 2016, Computer Graphics Forum

Abstract

Pixel art is a modern digital art form in which high resolution images are abstracted into low resolution pixelated outputs using concise outlines and reduced color palettes. Creating pixel art is a labor intensive and skill-demanding process due to the challenge of using limited pixels to represent complicated shapes. Not surprisingly, generating pixel art animation is even harder given the additional constraints imposed in the temporal domain. Although many powerful editors have been designed to facilitate the creation of still pixel art images, the extension to pixel art animation remains an unexplored direction. Existing systems typically request users to craft individual pixels frame by frame, which is a tedious and error-prone process. In this work, we present a novel animation framework tailored to pixel art images. Our system builds on the conventional key-frame animation framework and state-of-the-art image warping techniques to generate an initial animation sequence. The system then jointly optimizes the prominent feature lines of individual frames with respect to three metrics that capture the quality of the animation sequence in both spatial and temporal domains. We demonstrate our system by generating visually pleasing animations on a variety of pixel art images, which would otherwise be difficult to produce with state-of-the-art techniques due to severe artifacts.

2015

SMARTANNOTATOR: An Interactive Tool for Annotating Indoor RGBD Images
Yu-Shiang Wong, Hung-Kuo Chu, Niloy J. Mitra

Eurographics 2015, Computer Graphics Forum

Abstract

RGBD images with high quality annotations, both in the form of geometric (i.e., segmentation) and structural (i.e., how the segments mutually relate in 3D) information, provide valuable priors for a diverse range of applications in scene understanding and image manipulation. While it is now simple to acquire RGBD images, annotating them, automatically or manually, remains challenging. We present SMARTANNOTATOR, an interactive system to facilitate annotating raw RGBD images. The system performs the tedious tasks of grouping pixels, creating potential abstracted cuboids, inferring object interactions in 3D, and generating an ordered list of hypotheses. The user simply has to flip through the suggestions for segment labels, finalize a selection, and the system updates the remaining hypotheses. As annotations are finalized, the process becomes simpler with fewer ambiguities to resolve. Moreover, as more scenes are annotated, the system makes better suggestions based on the structural and geometric priors learned from previous annotation sessions. We test the system on a large number of indoor scenes across different users and experimental settings, validate the results on existing benchmark datasets, and report significant improvements over low-level annotation alternatives.


Tone- and Feature-Aware Circular Scribble Art
Chun-Chia Chiu, Yi-Hsiang Lo, Ruen-Rone Lee, Hung-Kuo Chu

Pacific Graphics 2015, Computer Graphics Forum

Abstract

Circular scribble art is a kind of line drawing where seemingly random, noisy and shapeless circular scribbles at a microscopic scale constitute astonishing grayscale images at a macroscopic scale. Such a delicate skill has rendered the creation of circular scribble art a tedious and time-consuming task even for gifted artists. In this work, we present a novel method for the automatic synthesis of circular scribble art. The synthesis problem is modeled as tracing along a virtual path using a parametric circular curve. To reproduce the tone and important edge structure of the input grayscale image, the system adaptively adjusts the density and structure of the virtual path, and dynamically controls the size, drawing speed and orientation of the parametric circular curve during synthesis. We demonstrate the potential of our system using several circular scribble images synthesized from a wide variety of grayscale images. A preliminary experimental study was conducted to qualitatively and quantitatively evaluate our method. The results show that our method is efficient and generates convincing results comparable to artworks by skilled artists.
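The core synthesis idea, tracing a parametric circular curve whose size adapts to the target tone, can be caricatured in a few lines. This is an illustrative sketch only: the names and constants are assumptions, and the actual method also adapts the virtual path's density and structure as well as the drawing speed and orientation.

```python
import math

def circular_scribble(path, tones, base_radius=4.0, samples=24):
    """Trace a parametric circle along a virtual path: at each path
    center we emit one loop whose radius shrinks where the target tone
    is dark, so dark regions receive smaller, denser loops (more ink
    per area). `path` is a list of (x, y) centers; `tones` holds a
    grayscale value in [0, 1] per center (0 = black, 1 = white)."""
    points, t = [], 0.0
    for (cx, cy), tone in zip(path, tones):
        r = base_radius * (0.3 + 0.7 * tone)   # darker tone -> smaller loop
        for _ in range(samples):
            t += 2.0 * math.pi / samples
            points.append((cx + r * math.cos(t), cy + r * math.sin(t)))
    return points
```

Because the phase `t` carries over between loops, the output is a single continuous polyline, loosely mimicking the one unbroken scribble of the artworks.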

Continuous Circular Scribble Arts
Chun-Chia Chiu, Yi-Hsiang Lo, Wei-Ting Ruan, Cheng-Han Yang, Ruen-Rone Lee, Hung-Kuo Chu


Abstract

We present a systematic approach to automatically synthesize scribble art from an input image using a single continuous circular scribble. The results resemble artworks by skilled artists, who use circular scribbles to imitate the shape, features, and luminance differences of the subject.

PIXEL2BRICK: Constructing Brick Sculptures from Pixel Art
Ming-Hsun Kuo, You-En Lin, Hung-Kuo Chu, Ruen-Rone Lee, Yongliang Yang

Pacific Graphics 2015, Computer Graphics Forum

Abstract

LEGO, a popular brick-based toy construction system, provides an affordable and convenient way of fabricating geometric shapes. However, building arbitrary shapes using LEGO bricks with a restricted set of colors and sizes is not trivial. It requires a careful design process to produce appealing, stable and constructable brick sculptures. In this work, we investigate the novel problem of constructing brick sculptures from pixel art images. In contrast to previous efforts that focus on 3D models, pixel art contains rich visual content for generating engaging LEGO designs. On the other hand, the characteristics of pixel art and the corresponding brick sculptures pose new challenges to the design process. We propose a novel computational framework that automatically constructs brick sculptures from pixel art, based on a set of design guidelines concerning the visual quality as well as the structural stability of the built sculptures. We demonstrate the effectiveness of our framework with various brick sculptures (both real and virtual) generated from a variety of pixel art images. Experimental results show that our system is efficient and gains significant improvements over the state of the art.
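One of the simplest design steps such a framework needs, covering a row of pixels with bricks from a fixed catalog, can be sketched greedily as below. The brick widths and the greedy strategy are illustrative assumptions; the paper additionally optimizes visual quality and structural stability (e.g. staggering seams across rows).

```python
BRICK_WIDTHS = (4, 3, 2, 1)  # assumed catalog of available 1xN brick widths

def layout_row(row):
    """Greedily cover one pixel-art row with bricks: each run of
    same-color pixels is tiled with the widest bricks that fit.
    Returns a list of (start_column, width, color) tuples."""
    bricks, i = [], 0
    while i < len(row):
        # find the run of identical colors starting at column i
        run = 1
        while i + run < len(row) and row[i + run] == row[i]:
            run += 1
        # tile the run, widest brick first
        j = 0
        while j < run:
            w = next(w for w in BRICK_WIDTHS if w <= run - j)
            bricks.append((i + j, w, row[i]))
            j += w
        i += run
    return bricks
```

A stability-aware layout would additionally shift brick boundaries between adjacent rows so that vertical seams do not line up.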

Court Reconstruction for Camera Calibration in Broadcast Basketball Videos
Pei-Chih Wen, Wei-Chih Cheng, Yu-Shuen Wang, Hung-Kuo Chu, Nick Tang, Hong-Yuan Mark Liao

IEEE Transactions on Visualization and Computer Graphics (TVCG)

[Paper] [Video]
Spatio-Temporal Learning of Basketball Offensive Strategies

ACM Multimedia Conference 2015 (Short Papers)

[Paper]

2014

Abstract

Annotating RGBD images with high quality semantic annotations is crucial to advanced scene understanding and image manipulation. While the popularity of affordable RGBD sensors has eased the process of acquiring RGBD images, annotating them, automatically or manually, is still a challenging task. State-of-the-art annotation tools focus only on 2D operations and provide at most image segmentation and object labels, even in the presence of depth data. In this work, we present an interactive system that exploits both color and depth cues to facilitate annotating RGBD images with image- and scene-level segmentation, object labels, and 3D geometry and structure. With our system, the user only has to provide a few scribbles to identify object instances and specify the labels and support relationships of objects, while the system performs the tedious tasks of segmenting the image and estimating the 3D cuboids. We test the system on a subset of a benchmark RGBD dataset and demonstrate that our system provides a convenient way to generate a baseline dataset with rich semantic annotations.


Image-based Paper Pop-up Design
Chen Liu, Yong-Liang Yang, Ya-Hsuan Lee, Hung-Kuo Chu


Abstract

An Origamic Architecture (OA), originally introduced by Masahiro Chatani in 1980, is a design of cuts and folds on a single piece of paper. Due to rigid paper crafting constraints, the OA design process is often time consuming and requires considerable skills. Several computer-aided design tools have been developed to provide a virtual design environment and assist the design process. However, the ultimate placement of cuts and folds still depends on the user, making the design process troublesome and highly skill-demanding. Unlike previous work where OA designs approximate 3D models, we use 2D images as input and automatically generate OA designs from 2D shapes.

Anamorphic Image Generation Using Hybrid Texture Synthesis
Chih-Kuo Yeh, Hung-Kuo Chu, Min-Jen Chang, Tong-Yee Lee

Journal of Information Science and Engineering (JISE)

[Paper]
Figure-Ground Image Generation using Contour Matching and Rigid Shape Deformation
Pei-Ke Chen, Hung-Kuo Chu, Chih-Kuo Yeh, Tong-Yee Lee


2013

Halftone QR Codes
Hung-Kuo Chu, Chia-Sheng Chang, Ruen-Rone Lee, Niloy J. Mitra

SIGGRAPH Asia 2013, ACM Transactions on Graphics

Abstract

QR code is a popular form of barcode pattern that is ubiquitously used to tag information to products or to link advertisements. While, on the one hand, it is essential to keep the patterns machine-readable, on the other hand, even small changes to the patterns can easily render them unreadable. Hence, in the absence of any computational support, such QR codes appear as random collections of black/white modules and are often visually unpleasant. We propose an approach to produce high quality visual QR codes, which we call halftone QR codes, that are still machine-readable. First, we build a pattern readability function wherein we learn a probability distribution of which modules can be replaced by which other modules. Then, given a text tag, we express the input image in terms of the learned dictionary to encode the source text. We demonstrate that our approach produces high quality results on a range of inputs and under different distortion effects.
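The enabling observation behind halftone QR codes is that a decoder samples each module only near its center, so the remaining area of the module is free to carry image content. A much-simplified sketch of that idea, using a fixed 3x3 subdivision and plain thresholding in place of the paper's learned readability model:

```python
def halftone_module(bit, patch):
    """Render one QR module as a 3x3 sub-pixel block: only the
    sub-pixel the decoder actually samples (near the module center)
    must keep the functional bit, while the surrounding sub-pixels
    follow the target image. `bit` is 0 (black) or 1 (white); `patch`
    is a 3x3 grid of target image intensities in [0, 1]."""
    block = [[1 if patch[r][c] > 0.5 else 0 for c in range(3)]
             for r in range(3)]
    block[1][1] = bit  # the center sub-pixel carries the code
    return block
```

A real encoder must also leave finder and timing patterns untouched and budget its module replacements against the code's error-correction capacity.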

Emerging Images Synthesis from Photographs
Mao-Fong Jian, Hung-Kuo Chu, Ruen-Rone Lee, Chia-Lun Ku, Yu-Shuen Wang, Chih-Yuan Yao


Abstract

In this work, we propose an automatic algorithm to synthesize emerging images from regular photographs. To generate images that are easy for humans, previous work rendered complex splats that capture the silhouette and shading information of 3D objects. We observe, however, that comparable information can be retrieved from photographs as well, and we replace the rendering of black complex splats with superpixels. The previous work further applies two post-processing steps to make segmentation harder for bots, and both of them have counterpart operations in the image domain. Supported by public image databases such as Flickr and Picasa, we envision a potential CAPTCHA application of our approach that massively and efficiently generates emerging images from photographs.

2010

Camouflage Images
Hung-Kuo Chu, Wei-Hsin Hsu, Niloy J. Mitra, Daniel Cohen-Or, Tien-Tsin Wong, Tong-Yee Lee

SIGGRAPH 2010, ACM Transactions on Graphics

Abstract

Camouflage images contain one or more hidden figures that remain imperceptible or unnoticed for a while. In one possible explanation, the ability to delay the perception of the hidden figures is attributed to the theory that human perception works in two main phases: feature search and conjunction search. Effective camouflage images make feature-based recognition difficult, and thus force the recognition process to employ conjunction search, which takes considerable effort and time. In this paper, we present a technique for creating camouflage images. To foil the feature search, we remove the original subtle texture details of the hidden figures and replace them with those of the surrounding apparent image. To leave an appropriate degree of clues for the conjunction search, we compute and assign new tones to regions in the embedded figures by performing an optimization between two conflicting terms, which we call immersion and standout, corresponding to hiding and leaving clues, respectively. We show a large number of camouflage images generated by our technique, with or without user guidance. We have tested the quality of the images in an extensive user study, showing good control of the difficulty levels.

2009

Abstract

Emergence refers to the unique human ability to aggregate information from seemingly meaningless pieces, and to perceive a whole that is meaningful. This special skill of humans can constitute an effective scheme to tell humans and machines apart. This paper presents a synthesis technique to generate images of 3D objects that are detectable by humans, but difficult for an automatic algorithm to recognize. The technique allows generating an infinite number of images with emerging figures. Our algorithm is designed so that locally the synthesized images divulge little useful information or cues to assist any segmentation or recognition procedure. Therefore, as we demonstrate, computer vision algorithms are incapable of effectively processing such images. However, when a human observer is presented with an emergence image, synthesized using an object she is familiar with, the figure emerges when observed as a whole. We can control the difficulty level of perceiving the emergence effect through a limited set of parameters. A procedure that synthesizes emergence images can be an effective tool for exploring and understanding the factors affecting computer vision techniques.

Multi-Resolution Mean Shift Clustering Algorithm for Shape Interpolation

IEEE Transactions on Visualization and Computer Graphics (TVCG)

Compatible Quadrangulation By Sketching
Chih-Yuan Yao, Hung-Kuo Chu, Tao Ju, Tong-Yee Lee


[Paper] [Video]

2008

Example-based Deformation Transfer for 3D Polygon Models
Hung-Kuo Chu, Chao-Hung Lin

Journal of Information Science and Engineering (JISE)

[Paper]
Skeleton Extraction by Mesh Contraction

SIGGRAPH 2008, ACM Transactions on Graphics

Abstract

Extraction of curve-skeletons is a fundamental problem with many applications in computer graphics and visualization. In this paper, we present a simple and robust skeleton extraction method based on mesh contraction. The method works directly on the mesh domain, without pre-sampling the mesh model into a volumetric representation. The method first contracts the mesh geometry into a zero-volume skeletal shape by applying implicit Laplacian smoothing with global positional constraints. The contraction does not alter the mesh connectivity and retains the key features of the original mesh. The contracted mesh is then converted into a 1D curve-skeleton through a connectivity surgery process to remove all the collapsed faces while preserving the shape of the contracted mesh and the original topology. The centeredness of the skeleton is refined by exploiting the induced skeleton-mesh mapping. The contraction process generates valuable information about the object's geometry, in particular, the skeleton-vertex correspondence and the local thickness, which are useful for various applications. We demonstrate its effectiveness in mesh segmentation and skinning animation.
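To make the contraction step concrete, here is a toy, explicit version of Laplacian contraction on a tiny mesh graph. This is only a caricature under stated assumptions: the real method solves an implicit linear system (with cotangent Laplacian weights and per-iteration constraint updates), whereas this sketch uses uniform one-ring averaging.

```python
def contract(vertices, neighbors, w_l=0.9, w_h=0.1, iters=100):
    """Explicit stand-in for implicit Laplacian contraction: each
    iteration moves every vertex toward the centroid of its one-ring
    (the contraction term, weight w_l) while an attraction to its
    original position (weight w_h) plays the role of the positional
    constraint that prevents total collapse. `vertices` maps id ->
    [x, y, z]; `neighbors` maps id -> list of adjacent vertex ids."""
    cur = {i: list(p) for i, p in vertices.items()}
    for _ in range(iters):
        nxt = {}
        for i in cur:
            ring = neighbors[i]
            cen = [sum(cur[j][k] for j in ring) / len(ring) for k in range(3)]
            nxt[i] = [(w_l * cen[k] + w_h * vertices[i][k]) / (w_l + w_h)
                      for k in range(3)]
        cur = nxt
    return cur
```

Connectivity is never changed here, mirroring the paper's property that contraction only moves vertices; the subsequent connectivity surgery that collapses the thin shape into a 1D skeleton is a separate step.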

2007

Mesh Pose-Editing Using Examples
Tong-Yee Lee, Chao-Hung Lin, Hung-Kuo Chu, Yu-Shuen Wang, Shao-Wei Yen, Chang-Rung Tsai


[Paper]

2006

Generating Genus-N-To-M Mesh Morphing Using Spherical Parameterization
Tong-Yee Lee, Chih-Yuan Yao, Hung-Kuo Chu, Ming-Jen Tai, Cheng-Chieh Chen


[Paper]

2005

Progressive Mesh Metamorphosis: Animating Geometrical Models
Chao-Hung Lin, Tong-Yee Lee, Hung-Kuo Chu, Zhi-Yuan Yao

Computer Animation and Social Agents 2005

[Paper]

Teaching History

  • 2015 Fall

    Introduction to Game Programming

    Game development is a hot topic in the modern entertainment industry. This course is divided into two major parts. In the first part, we present the basic programming skills and resources required for beginners who want to experience game development. Specifically, we cover the usage of the OpenGL API, one of the most popular graphics programming interfaces. In the second part, we give a training course on Unity3D, a well-known game engine capable of creating a polished game quickly and intuitively.

  • 2015 Spring

    Non-Photo-Realistic Rendering: Theory and Applications

    Non-Photo-Realistic (NPR) rendering is an extensively studied topic in the computer graphics community. The main idea is to produce images of aesthetic form with a specific artistic style from either 2D or 3D media. Well-known techniques include sketch simulation, watercolor painting simulation and illusory art reproduction. Among them, the reproduction of illusory art has become an active research topic in the recent decade. In this course, students are given an introduction to NPR history followed by studies of newly developed illusory art reproduction techniques. Students are asked to survey and study papers from top conferences related to NPR and give a weekly presentation. At the end of the course, each student is required to implement the technique of one paper and present the system in class.

  • 2015 Spring

    Computer Graphics

    This course is about the programming of 3D computer graphics. During the first half of this course, we will focus on the high-level programming of 3D graphics applications using the OpenGL API. (This approach, as the author of the first reference book describes it, is like learning to drive a car without having to know what's under the hood.) Then, during the second half of this course, we will study the whole process of a 3D renderer, which we will implement as parts of the assignments. If time allows, we will also cover topics such as texture mapping, curved surfaces, global illumination, etc.

  • 2014 Fall

    Introduction to Game Programming

    Game development is a hot topic in the modern entertainment industry. This course is divided into two major parts. In the first part, we present the basic programming skills and resources required for beginners who want to experience game development. Specifically, we cover the usage of the OpenGL API, one of the most popular graphics programming interfaces. In the second part, we give a training course on Unity3D, a well-known game engine capable of creating a polished game quickly and intuitively.

  • 2016 Spring

    Introduction to Graphics Programming and its Applications

    This course is an extension of application programming, focusing specifically on graphics application programming. Even before getting into the detailed algorithms and operations behind the mystery of computer graphics, it is possible to write basic graphics application programs. Just as Windows programming utilizes Windows APIs to control Windows applications, graphics programming uses graphics APIs, such as OpenGL, to drive graphics applications. By understanding the physical meaning and the control of each parameter in a graphics API, without knowing the true implementation behind it, we can write programs that utilize the API and produce nice rendering results through proper assignment and control of the required parameters.

    In this course, the OpenGL graphics API will be used to illustrate examples throughout the class. It is adopted because OpenGL is designed to be a cross-platform graphics API running on PCs and mobile devices. Although OpenGL ES (OpenGL for Embedded Systems) is widely adopted as the standard graphics API for mobile devices, it actually consists of well-defined subsets of desktop OpenGL. Students who learn OpenGL graphics programming will therefore benefit from being able to write graphics applications not only on PC platforms but also on many other mobile platforms.

  • 2016 Fall

    Introduction to Game Programming

    Game development is a hot topic in the modern entertainment industry. This course is divided into two major parts. In the first part, we present the basic programming skills and resources required for beginners who want to experience game development. Specifically, we cover the usage of the OpenGL API, one of the most popular graphics programming interfaces. In the second part, we give a training course on Unity3D, a well-known game engine capable of creating a polished game quickly and intuitively.
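As a taste of what the 3D renderer implemented in the Computer Graphics course's second half involves, here is a minimal perspective-projection step, assuming a camera at the origin looking down -z; the function name, parameters and defaults are illustrative only.

```python
def project(point, f=1.0, width=640, height=480):
    """Perspective-project a camera-space point (x, y, z) with z < 0
    and map it to pixel coordinates: perspective divide to normalized
    device coordinates in [-1, 1], then viewport mapping."""
    x, y, z = point
    sx, sy = f * x / -z, f * y / -z            # perspective divide
    px = (sx + 1.0) * 0.5 * width              # NDC -> pixel columns
    py = (1.0 - (sy + 1.0) * 0.5) * height     # flip y for raster rows
    return px, py
```

A full renderer chains this with model-view transforms, clipping, rasterization and shading, which is exactly the pipeline the course assignments build up.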

Office

Department of Computer Science
National Tsing Hua University
No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 30013
Room 641, Delta Building