High Resolution Face Completion
High Resolution Face Completion with Multiple Controllable Attributes via Fully End-to-End Progressive GANs
Zeyuan Chen, Shaoliang Nie, Tianfu Wu, and Christopher G. Healey [paper]
We present a deep learning approach for high resolution face completion with multiple controllable attributes (e.g., male and smiling) under arbitrary masks. We show that our system can complete faces with large structural and appearance variations using a single feed-forward pass of computation, with a mean inference time of 0.007 seconds for images at 1024 × 1024 resolution. We also perform a pilot human study showing that our approach outperforms state-of-the-art face completion methods in terms of rank analysis.
This figure shows face completion results of our method on CelebA-HQ. Images in the leftmost column of each group are masked with gray; the rest are synthesized faces. Top: our approach can complete face images at high resolution (1024 × 1024). Bottom: the attributes of completed faces can be controlled by conditional vectors. Attributes [“Male”, “Smiling”] are used in this example. The conditional vectors of columns two to five are [0, 0], [1, 0], [0, 1], and [1, 1], where “1” denotes that the generated image has the particular attribute and “0” denotes that it does not. Images are at 512 × 512 resolution. All images best viewed enlarged.
Video Demo: Attribute Controller
In addition to controlling whether an attribute appears in the synthesized content with binary values (i.e., “0” or “1”), we can control subtle appearance and facial expressions with values interpolated between zero and one. Attributes in the demo: [Male, Smiling].
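The interpolation idea above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: `generator` stands in for the trained completion network, and `interpolate_conditions` is a hypothetical helper that produces the intermediate attribute vectors fed to it.

```python
import numpy as np

def interpolate_conditions(start, end, steps):
    """Linearly interpolate between two attribute vectors.

    start, end: attribute vectors such as [male, smiling],
    with entries in [0, 1]. Returns a (steps, len(start)) array
    of conditional vectors.
    """
    start = np.asarray(start, dtype=float)
    end = np.asarray(end, dtype=float)
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * start + a * end for a in alphas])

# Example: fade a male face from "not smiling" to "smiling" over 5 frames.
conds = interpolate_conditions([1, 0], [1, 1], 5)

# Each row would then be fed to the completion network together with the
# masked image, e.g.:
#   frames = [generator(masked_image, cond) for cond in conds]
```

Intermediate vectors such as [1, 0.5] give partially expressed attributes, which is what produces the gradual expression changes in the demo video.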
Large Scale Image Collection Visualization
Large Image Collection Visualization Using Perception-Based Similarity with Color Features
Zeyuan Chen and Christopher G. Healey,
in International Symposium on Visual Computing, pp. 379–390. Springer, 2016. [paper]
This paper introduces the basic steps to build a similarity-based visualization tool for large image collections. We build the similarity metrics based on human perception. Psychophysical experiments have shown that human observers can recognize the gist of a scene within 100 milliseconds (msec) by comprehending the global properties of an image. Color also plays an important role in rapid scene recognition, yet previous work often neglects color features. We propose new scene descriptors that preserve information from coherent color regions as well as the spatial layout of scenes. Experiments show that our descriptors outperform existing state-of-the-art approaches. Given the similarity metrics, a hierarchical structure of an image collection can be built in a top-down manner. Representative images are chosen for image clusters and visualized using a force-directed graph.
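One natural way to pick a cluster's representative image from a pairwise similarity metric is the medoid: the image most similar, on average, to the rest of its cluster. The sketch below illustrates that selection step only; it is an assumption for illustration, not necessarily the paper's exact selection rule, and the toy similarity matrix `S` is made up.

```python
import numpy as np

def medoid(similarity, members):
    """Return the index in `members` whose mean similarity to the
    other members is highest (the cluster's medoid)."""
    sub = similarity[np.ix_(members, members)]
    # Exclude self-similarity on the diagonal from the average.
    mean_sim = (sub.sum(axis=1) - np.diag(sub)) / max(len(members) - 1, 1)
    return members[int(np.argmax(mean_sim))]

# Toy 4-image collection: images 0 and 1 form one perceptual cluster,
# images 2 and 3 another. Entries are pairwise similarities in [0, 1].
S = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])
rep = medoid(S, [0, 1, 2, 3])
```

Applying this per cluster at each level of the hierarchy yields one representative image per node, which can then be laid out with a force-directed graph.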
Planar Object Recognition and Pose Estimation
Recognize planar objects in images and estimate their pose with respect to the ground.
We combine the estimated pose with the iPhone's gravity sensor to compute the planar object's pose with respect to the ground.
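A minimal sketch of the composition step, under assumed conventions: `R_obj_cam` is the planar object's rotation in the camera frame (e.g., from pose estimation), and `g_cam` is the gravity vector read from the phone, expressed in camera coordinates. The gravity direction defines a ground-aligned frame, and chaining the two rotations gives the object's orientation with respect to the ground. The helper names and frame conventions here are illustrative, not the project's actual code.

```python
import numpy as np

def ground_rotation_from_gravity(g_cam):
    """Rotation taking camera coordinates to a ground-aligned frame
    whose z-axis points opposite to gravity (i.e., "up")."""
    up = -np.asarray(g_cam, dtype=float)
    up /= np.linalg.norm(up)
    # Pick an arbitrary horizontal x-axis orthogonal to "up"; the
    # heading about the vertical is unobservable from gravity alone.
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(tmp, up)) > 0.9:
        tmp = np.array([0.0, 1.0, 0.0])
    x = np.cross(tmp, up)
    x /= np.linalg.norm(x)
    y = np.cross(up, x)
    # Rows are the ground-frame axes expressed in camera coordinates.
    return np.stack([x, y, up])

def object_pose_wrt_ground(R_obj_cam, g_cam):
    """Compose camera->ground with object->camera rotations."""
    return ground_rotation_from_gravity(g_cam) @ R_obj_cam
```

Because gravity fixes only the vertical direction, the result is defined up to a rotation about the ground's up-axis; that is sufficient for quantities such as the object's tilt relative to the ground plane.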