1. The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement
William Peebles, John Peebles, Jun-Yan Zhu, Alexei Efros, Antonio Torralba
Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson’s estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN’s latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.
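For intuition, here is a minimal PyTorch sketch of such a finite-difference estimator: Rademacher vectors probe the second directional derivative of the generator, and since the diagonal Hessian entries contribute the same constant to every probe, the variance across probes is driven purely by the off-diagonal terms. The number of probes `k`, the step size `epsilon`, and the final `max` reduction are illustrative choices, not necessarily the paper's exact settings.

```python
import torch

def hessian_penalty(G, z, k=2, epsilon=0.1):
    """Stochastic penalty on the off-diagonal Hessian entries of G at z."""
    # k Rademacher (+/-1) directions, scaled to a small step size
    vs = epsilon * (2 * torch.randint(0, 2, (k, *z.shape),
                                      device=z.device, dtype=z.dtype) - 1)
    # Central finite differences approximate the second directional
    # derivative v^T H v of each output element of G along each v
    second_orders = torch.stack(
        [(G(z + v) - 2 * G(z) + G(z - v)) / epsilon ** 2 for v in vs]
    )
    # Diagonal Hessian terms are identical across Rademacher probes, so the
    # variance across probes isolates the off-diagonal terms being penalized
    return torch.var(second_orders, dim=0, unbiased=True).max()
```

During training, the term is simply added to the usual generator loss, e.g. `loss = g_loss + lam * hessian_penalty(G, z)`, which is what makes the method applicable to many generators with just a few lines of code.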
— AK (@ak92501) August 25, 2020
pdf: https://t.co/hDferVdWG4
abs: https://t.co/MrD6jIsTQ0
project page: https://t.co/XMMU5OTedj
github: https://t.co/YlgT4C5UzV
2. Semantic View Synthesis
Hsin-Ping Huang, Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang
We tackle a new problem of semantic view synthesis: generating a free-viewpoint rendering of a synthesized scene using a semantic label map as input. We build upon recent advances in semantic image synthesis and view synthesis to handle photographic image content generation and view extrapolation. Direct application of existing image/view synthesis methods, however, results in severe ghosting/blurry artifacts. To address these drawbacks, we propose a two-step approach. First, we focus on synthesizing the color and depth of the visible surface of the 3D scene. We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process. Our method produces sharp content at the original view and geometrically consistent renderings across novel viewpoints. Experiments on numerous indoor and outdoor scenes show favorable results against several strong baselines and validate the effectiveness of our approach.
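The MPI representation used in the second step is a stack of fronto-parallel RGBA planes at fixed depths; rendering the reference view amounts to back-to-front alpha compositing, while novel views additionally warp each plane by a homography before compositing. A minimal sketch of the compositing step, with illustrative tensor shapes:

```python
import torch

def composite_mpi(rgb, alpha):
    """Render an MPI at the reference view by back-to-front compositing.

    rgb:   (D, 3, H, W) per-plane color, ordered back (0) to front (D-1)
    alpha: (D, 1, H, W) per-plane opacity in [0, 1]
    """
    out = torch.zeros_like(rgb[0])
    for c, a in zip(rgb, alpha):
        out = c * a + out * (1 - a)  # the standard "over" operator
    return out
```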
— AK (@ak92501) August 25, 2020
pdf: https://t.co/AlCJZezD8d
abs: https://t.co/bL0nnp9oIh
project page: https://t.co/gthjh8tf6R
github: https://t.co/oUR10YksOw
3. Generate High Resolution Images With Generative Variational Autoencoder
Abhinav Sagar
In this work, we present a novel neural network to generate high-resolution images. We replace the decoder of a VAE with a discriminator while keeping the encoder as is. The encoder uses data from a normal distribution, while the generator samples from a Gaussian distribution. The combined output is passed to a discriminator, which judges whether the generated images are realistic. We evaluate our network on three datasets: MNIST, LSUN, and CelebA-HQ. Our network beats the previous state of the art on MMD, SSIM, log-likelihood, reconstruction error, ELBO, and KL divergence as evaluation metrics while generating much sharper images. This work is potentially very exciting, as we are able to combine the advantages of generative models and inference models in a principled Bayesian manner.
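The abstract's description of the architecture is brief, so the following is only a generic VAE-GAN-style training objective consistent with it: the VAE's encoder and reconstruction/KL terms are kept, and a discriminator supplies an adversarial term in place of a purely pixel-wise decoder loss. All interfaces and the choice of losses here are assumptions for illustration, not the paper's exact method.

```python
import torch
import torch.nn.functional as F

def vae_gan_losses(encoder, generator, discriminator, x):
    # Assumed interfaces: encoder(x) -> (mu, logvar); generator(z) -> x_hat;
    # discriminator(img) -> logit. These are illustrative, not the paper's.
    mu, logvar = encoder(x)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
    x_hat = generator(z)

    recon = F.mse_loss(x_hat, x)                                   # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)

    real_logit = discriminator(x)
    fake_logit = discriminator(x_hat.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
              + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))
    adv = F.binary_cross_entropy_with_logits(discriminator(x_hat),
                                             torch.ones_like(real_logit))
    g_loss = recon + kl + adv  # encoder/generator try to fool the discriminator
    return d_loss, g_loss
```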
— AK (@ak92501) August 25, 2020
pdf: https://t.co/krzKlKCphQ
abs: https://t.co/Yq7GRebhVg
github: https://t.co/pSsAmgD30B
4. CA-GAN: Weakly Supervised Color Aware GAN for Controllable Makeup Transfer
Robin Kips, Pietro Gori, Matthieu Perrot, Isabelle Bloch
While existing makeup style transfer models perform image synthesis whose results cannot be explicitly controlled, the ability to modify makeup color continuously is a desirable property for virtual try-on applications. We propose a new formulation of the makeup style transfer task, with the objective of learning color-controllable makeup style synthesis. We introduce CA-GAN, a generative model that learns to modify the color of specific objects (e.g., lips or eyes) in an image to an arbitrary target color while preserving the background. Since color labels are rare and costly to acquire, our method leverages weakly supervised learning for conditional GANs. This makes it possible to learn a controllable synthesis of complex objects while requiring only a weak proxy of the image attribute we wish to modify. Finally, we present, for the first time, a quantitative analysis of makeup style transfer and color control performance.
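As an illustration of what a "weak proxy" of the color attribute might look like, one could take a robust color statistic of the target region, e.g. the median RGB value inside a lips segmentation mask, and use it as a pseudo-label for the conditional GAN. The mask source and the exact statistic below are assumptions, not the paper's pipeline.

```python
import torch

def weak_color_proxy(image, mask):
    """Weak pseudo-label for a region's color: its per-channel median.

    image: (3, H, W) tensor in [0, 1]; mask: (H, W) boolean region
    (e.g. a lips segmentation).
    """
    pixels = image[:, mask]             # (3, N) pixels inside the region
    return pixels.median(dim=1).values  # (3,) median RGB color

# Hypothetical use: condition the generator on a target color and supervise
# it weakly by comparing the proxy of its output against that target, e.g.
#   fake = G(image, target_color)
#   color_loss = (weak_color_proxy(fake, mask) - target_color).abs().mean()
```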
— AK (@ak92501) August 25, 2020
pdf: https://t.co/koLLPww9nM
abs: https://t.co/f7EJiC44iU
project page: https://t.co/xWiA4QVV1t
5. Self-Supervised Learning for Large-Scale Unsupervised Image Clustering
Evgenii Zheltonozhskii, Chaim Baskin, Alex M. Bronstein, Avi Mendelson
Unsupervised learning has always been appealing to machine learning researchers and practitioners, allowing them to avoid an expensive and complicated process of labeling the data. However, unsupervised learning of complex data is challenging, and even the best approaches show much weaker performance than their supervised counterparts. Self-supervised deep learning has become a strong instrument for representation learning in computer vision. However, these methods have not been evaluated in a fully unsupervised setting. In this paper, we propose a simple scheme for unsupervised classification based on self-supervised representations. We evaluate the proposed approach with several recent self-supervised methods, showing that it achieves competitive results for ImageNet classification (39% accuracy on ImageNet with 1000 clusters and 46% with overclustering). We suggest adding this unsupervised evaluation to the set of standard benchmarks for self-supervised learning. The code is available at https://github.com/Randl/kmeans_selfsuper
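The scheme itself (suggested by the repository name) is essentially k-means on frozen self-supervised features, scored against the true labels via an optimal one-to-one matching between clusters and classes. A sketch of that evaluation recipe for the equal-cluster case; overclustering is typically scored by mapping each cluster to its majority class instead:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(features, labels, n_clusters=1000):
    """k-means on (N, d) features, scored against (N,) integer labels."""
    preds = KMeans(n_clusters=n_clusters).fit_predict(features)
    # Contingency table between cluster ids and ground-truth classes
    table = np.zeros((n_clusters, labels.max() + 1), dtype=np.int64)
    for p, y in zip(preds, labels):
        table[p, y] += 1
    # Hungarian matching: assign each cluster the class maximizing agreement
    rows, cols = linear_sum_assignment(table, maximize=True)
    return table[rows, cols].sum() / len(labels)
```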
Self-supervised learning is really hot now. In our new paper (https://t.co/n8pyI5CS3V) with @ChaimBaskin Alex Bronstein and Avi Mendelson we study self-supervised learning in unsupervised clustering settings. The code is available at https://t.co/dGekFTW962 1/n
— Evgenii Zheltonozhskii (@evgeniyzhe) August 25, 2020
6. Hierarchical Style-based Networks for Motion Synthesis
Jingwei Xu, Huazhe Xu, Bingbing Ni, Xiaokang Yang, Xiaolong Wang, Trevor Darrell
Generating diverse and natural human motion is one of the long-standing goals of creating intelligent characters in the animated world. In this paper, we propose a self-supervised method for generating long-range, diverse, and plausible behaviors that reach a specific goal location. Our method learns to model human motion by decomposing the long-range generation task in a hierarchical manner. Given the starting and ending states, a memory bank is used to retrieve motion references as source material for short-range clip generation. We first propose to explicitly disentangle the provided motion material into style and content components via bilinear transformation modeling, where diverse synthesis is achieved by free-form combination of these two components. The short-range clips are then connected to form a long-range motion sequence. Without ground-truth annotation, we propose a parameterized bi-directional interpolation scheme to guarantee the physical validity and visual naturalness of the generated results. On a large-scale skeleton dataset, we show that the proposed method can synthesize long-range, diverse, and plausible motion, and that it generalizes to unseen motion data at test time. Moreover, we demonstrate that the generated sequences are useful as subgoals for actual physical execution in the animated world.
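The bilinear transformation modeling can be pictured as a learned bilinear map that takes any (style, content) pair to a motion code, which is what makes free-form recombination possible. A minimal sketch with illustrative dimensions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class StyleContentCombiner(nn.Module):
    """Recombine style and content codes with a learned bilinear map:
    each output dim k is computed as style^T W_k content, so codes taken
    from different clips still produce a valid motion code."""
    def __init__(self, style_dim=64, content_dim=64, out_dim=128):
        super().__init__()
        self.bilinear = nn.Bilinear(style_dim, content_dim, out_dim)

    def forward(self, style, content):
        return self.bilinear(style, content)

# Free-form combination: style from clip A with content from clip B
#   motion_code = combiner(style_a, content_b)
```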
— AK (@ak92501) August 25, 2020
pdf: https://t.co/uaPtSFuA2F
abs: https://t.co/DVaBZEdXGo
project page: https://t.co/wIuyxXo37E