1. HumanGAN: A Generative Model of Human Images
Kripasindhu Sarkar, Lingjie Liu, Vladislav Golyanik, Christian Theobalt
Generative adversarial networks achieve great performance in photorealistic image synthesis in various domains, including human images. However, they usually employ latent vectors that encode the sampled outputs globally. This does not allow convenient control over semantically relevant individual parts of the image, and it cannot draw samples that differ only in partial aspects, such as clothing style. We address these limitations and present a generative model for images of dressed humans offering control over pose, local body part appearance and garment style. This is the first method to solve various aspects of human image generation, such as global appearance sampling, pose transfer, parts and garment transfer, and parts sampling, jointly in a unified framework. As our model encodes part-based latent appearance vectors in a normalized pose-independent space and warps them to different poses, it preserves body and clothing appearance under varying posture. Experiments show that our flexible and general generative method outperforms task-specific baselines for pose-conditioned image generation, pose transfer and part sampling in terms of realism and output resolution.
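To make the part-based idea concrete, here is a minimal illustrative sketch (Python/NumPy, not the authors' code) of sampling one appearance latent per body part in a pose-independent space and "warping" the latents onto a target-pose layout; all names and shapes are invented.

```python
# Illustrative sketch only -- not the HumanGAN implementation. It mimics the idea of
# sampling one latent appearance vector per body part in a normalized,
# pose-independent space and then warping (here: gathering) them into a
# target-pose layout before decoding. Names and shapes are made up.
import numpy as np

rng = np.random.default_rng(0)

N_PARTS, LATENT_DIM = 8, 16          # e.g. head, torso, arms, legs, garments...
H, W = 64, 64                        # spatial resolution of the pose layout

# One latent per semantic part, sampled independently -> parts can be resampled alone.
part_latents = rng.standard_normal((N_PARTS, LATENT_DIM))

# A target pose, represented as a dense part-segmentation map (which part owns each pixel).
pose_part_map = rng.integers(0, N_PARTS, size=(H, W))

# "Warp" the normalized per-part latents onto the target pose: every pixel receives
# the latent of the part it belongs to, so appearance follows the body under new poses.
warped = part_latents[pose_part_map]          # (H, W, LATENT_DIM)

# A real model would decode `warped` with a GAN generator; resampling only
# part_latents[k] changes just that part (e.g. garment style) in the output.
print(warped.shape)
```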
HumanGAN: A Generative Model of Human Images
— AK (@ak92501) March 15, 2021
pdf: https://t.co/azARdJ9qlA
abs: https://t.co/PEtACUcLWY pic.twitter.com/jsectwAJEg
HumanGAN https://t.co/T7dkM6jFuE (2019)
— Shinnosuke Takamichi (高道 慎之介) (@forthshinji) March 15, 2021
vs.
HumanGAN https://t.co/oXfNBcwPRh (2021) https://t.co/e4FEQ4XbUR
2. Probabilistic two-stage detection
Xingyi Zhou, Vladlen Koltun, Philipp Krähenbühl
We develop a probabilistic interpretation of two-stage object detection. We show that this probabilistic interpretation motivates a number of common empirical training practices. It also suggests changes to two-stage detection pipelines. Specifically, the first stage should infer proper object-vs-background likelihoods, which should then inform the overall score of the detector. A standard region proposal network (RPN) cannot infer this likelihood sufficiently well, but many one-stage detectors can. We show how to build a probabilistic two-stage detector from any state-of-the-art one-stage detector. The resulting detectors are faster and more accurate than both their one- and two-stage precursors. Our detector achieves 56.4 mAP on COCO test-dev with single-scale testing, outperforming all published results. Using a lightweight backbone, our detector achieves 49.2 mAP on COCO at 33 fps on a Titan Xp, outperforming the popular YOLOv4 model.
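The core probabilistic idea can be written as a one-line formula; the toy Python snippet below (not the authors' code, all numbers invented) shows how a first-stage objectness likelihood and a second-stage class score combine into the final detection score.

```python
# Minimal sketch of the probabilistic scoring idea (not the paper's implementation):
# the first stage outputs an object-vs-background likelihood per proposal and the
# second stage a class-conditional probability; the detector's final score is their
# product, so weak proposals cannot yield confident detections. All numbers are toy.
import numpy as np

p_object = np.array([0.9, 0.4, 0.05])             # first-stage objectness per proposal
p_class_given_object = np.array([                 # second-stage class probabilities
    [0.8, 0.1, 0.1],
    [0.3, 0.6, 0.1],
    [0.2, 0.2, 0.6],
])

final_scores = p_object[:, None] * p_class_given_object   # P(class) = P(obj) * P(class | obj)
print(final_scores.max(axis=1))                    # per-proposal detection confidence
```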
Probabilistic two-stage detection
— AK (@ak92501) March 15, 2021
pdf: https://t.co/AJ4m1cm95g
abs: https://t.co/HERcj5bNFw
github: https://t.co/U8poNvhe78 pic.twitter.com/udAiVtGpw7
Probabilistic two-stage detection https://t.co/Y5LnYLYYoA https://t.co/iuOC2D1uoS
— phalanx (@ZFPhalanx) March 15, 2021
The one from last year has been made public. pic.twitter.com/zNDi2vpQmh
3. Modern Dimension Reduction
Philip D. Waggoner
- retweets: 252, favorites: 63 (03/16/2021 09:09:32)
- links: abs | pdf
- cs.LG | cs.CY | stat.AP | stat.ML
Data are not only ubiquitous in society, but are increasingly complex both in size and dimensionality. Dimension reduction offers researchers and scholars the ability to make such complex, high dimensional data spaces simpler and more manageable. This Element offers readers a suite of modern unsupervised dimension reduction techniques along with hundreds of lines of R code, to efficiently represent the original high dimensional data space in a simplified, lower dimensional subspace. Launching from the earliest dimension reduction technique, principal components analysis, and using real social science data, I introduce and walk readers through application of the following techniques: locally linear embedding, t-distributed stochastic neighbor embedding (t-SNE), uniform manifold approximation and projection, self-organizing maps, and deep autoencoders. The result is a well-stocked toolbox of unsupervised algorithms for tackling the complexities of high dimensional data so common in modern society. All code is publicly accessible on GitHub.
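The Element's code is in R; as a rough Python analogue of the same workflow, here is a short scikit-learn sketch that reduces a standard 64-dimensional dataset with PCA and then t-SNE (the library calls are standard, but the example is ours, not the book's).

```python
# Hedged Python analogue of the workflow described above: project a high-dimensional
# dataset to 2D with a linear method (PCA) and a nonlinear one (t-SNE).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)          # 64-dimensional feature space

# Linear baseline: principal components analysis down to 2 dimensions.
X_pca = PCA(n_components=2).fit_transform(X)

# Nonlinear embedding: t-SNE, initialized from PCA for stability.
X_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

print(X_pca.shape, X_tsne.shape)             # both (n_samples, 2)
```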
Looks like a really cool intro to unsupervised methods, with a focus on the social sciences! "Modern Dimension Reduction" by Philip D. Waggoner: https://t.co/UqDLrijJXl; and there's code (https://t.co/SYBp8mCX7O)! pic.twitter.com/pYYCn8RXlr
— Adam Lauretig (@lauretig) March 15, 2021
4. VDSM: Unsupervised Video Disentanglement with State-Space Modeling and Deep Mixtures of Experts
Matthew J. Vowels, Necati Cihan Camgoz, Richard Bowden
Disentangled representations support a range of downstream tasks including causal reasoning, generative modeling, and fair machine learning. Unfortunately, disentanglement has been shown to be impossible without the incorporation of supervision or inductive bias. Given that supervision is often expensive or infeasible to acquire, we choose to incorporate structural inductive bias and present an unsupervised, deep State-Space-Model for Video Disentanglement (VDSM). The model disentangles latent time-varying and dynamic factors via the incorporation of hierarchical structure with a dynamic prior and a Mixture of Experts decoder. VDSM learns separate disentangled representations for the identity of the object or person in the video, and for the action being performed. We evaluate VDSM across a range of qualitative and quantitative tasks including identity and dynamics transfer, sequence generation, Fréchet Inception Distance, and factor classification. VDSM provides state-of-the-art performance and exceeds adversarial methods, even when the methods use additional supervision.
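As a toy illustration of the Mixture-of-Experts decoding idea described above (not the VDSM implementation; shapes and names are invented), the sketch below lets a static identity latent gate several expert decoders while a separate dynamics latent is what gets decoded.

```python
# Toy sketch of a Mixture-of-Experts decoder: the identity latent produces mixture
# weights over expert decoders, and the time-varying dynamics latent is decoded by
# the gated blend of experts, so the two factors are separated by construction.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, Z_ID, Z_DYN, OUT = 4, 8, 6, 32

z_identity = rng.standard_normal(Z_ID)            # who is in the video (static)
z_dynamics = rng.standard_normal(Z_DYN)           # what they are doing (per frame)

W_gate = rng.standard_normal((N_EXPERTS, Z_ID))   # random stand-ins for trained weights
experts = rng.standard_normal((N_EXPERTS, OUT, Z_DYN))

# Softmax gate: identity decides how much each expert contributes.
logits = W_gate @ z_identity
weights = np.exp(logits - logits.max())
weights /= weights.sum()

expert_outputs = experts @ z_dynamics             # (N_EXPERTS, OUT): each expert's decode
frame_features = weights @ expert_outputs         # gated blend -> one decoded frame feature
print(frame_features.shape)
```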
VDSM: Unsupervised Video Disentanglement with State-Space Modeling and Deep Mixtures of Experts
— AK (@ak92501) March 15, 2021
pdf: https://t.co/vWOgbT8BXC
abs: https://t.co/6nZyRIgWfi
github: https://t.co/5rIOG6XWhK pic.twitter.com/emdsFW2Dcr
5. Large Batch Simulation for Deep Reinforcement Learning
Brennan Shacklett, Erik Wijmans, Aleksei Petrenko, Manolis Savva, Dhruv Batra, Vladlen Koltun, Kayvon Fatahalian
We accelerate deep reinforcement learning-based training in visually complex 3D environments by two orders of magnitude over prior work, realizing end-to-end training speeds of over 19,000 frames of experience per second on a single GPU and up to 72,000 frames per second on a single eight-GPU machine. The key idea of our approach is to design a 3D renderer and embodied navigation simulator around the principle of “batch simulation”: accepting and executing large batches of requests simultaneously. Beyond exposing large amounts of work at once, batch simulation allows implementations to amortize in-memory storage of scene assets, rendering work, data loading, and synchronization costs across many simulation requests, dramatically improving the number of simulated agents per GPU and overall simulation throughput. To balance DNN inference and training costs with faster simulation, we also build a computationally efficient policy DNN that maintains high task performance, and modify training algorithms to maintain sample efficiency when training with large mini-batches. By combining batch simulation and DNN performance optimizations, we demonstrate that PointGoal navigation agents can be trained in complex 3D environments on a single GPU in 1.5 days to 97% of the accuracy of agents trained on a prior state-of-the-art system using a 64-GPU cluster over three days. We provide open-source reference implementations of our batch 3D renderer and simulator to facilitate incorporation of these ideas into RL systems.
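The "batch simulation" principle can be sketched in a few lines; the toy Python below (not the authors' renderer or simulator) only illustrates the interface: one simulator call advances all agents at once, and one policy forward pass serves the whole batch.

```python
# Hedged sketch of the batch-simulation pattern: the simulator accepts a batch of
# agent actions and returns a batch of observations, so rendering, asset storage,
# and policy inference are amortized across many agents. All components are toys.
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, N_ACTIONS = 1024, 128, 4       # toy sizes

def batched_simulator_step(actions):
    """Pretend simulator: one call advances *all* agents and renders all observations."""
    return rng.standard_normal((N_AGENTS, OBS_DIM)).astype(np.float32)

def policy(observations):
    """Pretend policy DNN: one forward pass scores actions for the whole batch."""
    logits = observations @ rng.standard_normal((OBS_DIM, N_ACTIONS)).astype(np.float32)
    return logits.argmax(axis=1)

obs = batched_simulator_step(np.zeros(N_AGENTS, dtype=np.int64))
for _ in range(10):                                # rollout loop: simulate -> infer -> repeat
    actions = policy(obs)
    obs = batched_simulator_step(actions)
print(obs.shape)                                   # (N_AGENTS, OBS_DIM)
```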
Large Batch Simulation for Deep Reinforcement Learning
— AK (@ak92501) March 15, 2021
pdf: https://t.co/km7Es8HxIt
abs: https://t.co/tEwdQBnI8J
github: https://t.co/5CRJvEbqJy pic.twitter.com/hqbHkHwQ2a
6. Neural Reprojection Error: Merging Feature Learning and Camera Pose Estimation
Hugo Germain, Vincent Lepetit, Guillaume Bourmaud
Absolute camera pose estimation is usually addressed by sequentially solving two distinct subproblems: first a feature matching problem that seeks to establish putative 2D-3D correspondences, and then a Perspective-n-Point problem that minimizes, with respect to the camera pose, the sum of so-called Reprojection Errors (RE). We argue that generating putative 2D-3D correspondences 1) leads to an important loss of information that needs to be compensated as far as possible, within RE, through the choice of a robust loss and the tuning of its hyperparameters and 2) may lead to an RE that conveys erroneous data to the pose estimator. In this paper, we introduce the Neural Reprojection Error (NRE) as a substitute for RE. NRE allows us to rethink the camera pose estimation problem by merging it with the feature learning problem, hence leveraging richer information than 2D-3D correspondences and eliminating the need for choosing a robust loss and its hyperparameters. Thus NRE can be used as a training loss to learn image descriptors tailored for pose estimation. We also propose a coarse-to-fine optimization method able to very efficiently minimize a sum of NRE terms with respect to the camera pose. We experimentally demonstrate that NRE is a good substitute for RE as it significantly improves both the robustness and the accuracy of the camera pose estimate while being highly efficient in both computation and memory. From a broader point of view, we believe this new way of merging deep learning and 3D geometry may be useful in other computer vision applications.
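Reading from the abstract alone, the NRE for a single 2D-3D pair can be pictured as a negative log-probability over image pixels; the sketch below is a hedged interpretation, not the authors' code, with all shapes and values invented.

```python
# Illustrative sketch of the Neural Reprojection Error idea: instead of a hand-crafted
# distance between a detected 2D point and the projection of its 3D point, each 3D
# point comes with a dense 2D probability map predicted from image features, and the
# loss is the negative log-probability of the pixel where a candidate pose reprojects
# that point. Summed over correspondences, this is what gets minimized over the pose.
import numpy as np

H, W = 32, 32
rng = np.random.default_rng(0)

# Dense per-pixel scores for one 3D point (in practice produced by a network from descriptors).
scores = rng.standard_normal((H, W))
log_prob = scores - np.log(np.exp(scores).sum())          # log-softmax over all pixels

def neural_reprojection_error(log_prob_map, projected_uv):
    """NRE for one 2D-3D pair: -log p(projection lands at pixel (u, v))."""
    u, v = projected_uv
    return -log_prob_map[v, u]

# A candidate camera pose projects the 3D point to pixel (10, 20); better poses move
# the projection toward high-probability pixels.
print(neural_reprojection_error(log_prob, (10, 20)))
```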
Neural Reprojection Error: Merging Feature Learning and Camera Pose Estimation
— Dmytro Mishkin (@ducha_aiki) March 15, 2021
Hugo Germain, Vincent Lepetit, Guillaume Bourmaud https://t.co/GHl0pLtLRk
Idea: require the dense descriptor similarity to match a "reprojection" probability, i.e. a small blob where the 3D point lies in the 2D image. pic.twitter.com/42RN9EVkwz
7. Latent Space Explorations of Singing Voice Synthesis using DDSP
Juan Alonso, Cumhur Erkut
Machine learning based singing voice models require large datasets and lengthy training times. In this work we present a lightweight architecture, based on the Differentiable Digital Signal Processing (DDSP) library, that is able to output song-like utterances conditioned only on pitch and amplitude, after twelve hours of training using small datasets of unprocessed audio. The results are promising, as both the melody and the singer’s voice are recognizable. In addition, we present two zero-configuration tools to train new models and experiment with them. Currently we are exploring the latent space representation, which is included in the DDSP library, but not in the original DDSP examples. Our results indicate that the latent space improves both the identification of the singer as well as the comprehension of the lyrics. Our code is available at https://github.com/juanalonso/DDSP-singing-experiments with links to the zero-configuration notebooks, and our sound examples are at https://juanalonso.github.io/DDSP-singing-experiments/ .
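As a hedged illustration of conditioning synthesis on nothing but pitch and amplitude, here a bare sine oscillator stands in for the learned DDSP decoder; this is not the authors' model, and all settings are invented.

```python
# Toy sketch: synthesis driven only by an f0 (pitch) curve and an amplitude curve,
# which is why a small model and a small dataset can suffice. A real DDSP decoder
# would predict harmonic amplitudes and filtered noise instead of a single sine.
import numpy as np

SR = 16000
t_frames = 100
f0 = np.linspace(220.0, 440.0, t_frames)       # pitch contour (Hz), per frame
amp = np.hanning(t_frames)                     # amplitude contour, per frame

# Upsample frame-wise controls to audio rate and integrate phase.
hop = 64
f0_audio = np.repeat(f0, hop)
amp_audio = np.repeat(amp, hop)
phase = 2 * np.pi * np.cumsum(f0_audio / SR)
audio = amp_audio * np.sin(phase)
print(audio.shape)                             # one short song-like glide
```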
Latent Space Explorations of Singing Voice Synthesis using DDSP
— AK (@ak92501) March 15, 2021
pdf: https://t.co/mJCyCSc3Dm
abs: https://t.co/7T1igxWAAk
github: https://t.co/5yYxpaykGx pic.twitter.com/roZEFENUko
8. Vision Transformer for COVID-19 CXR Diagnosis using Chest X-ray Feature Corpus
Sangjoon Park, Gwanghyun Kim, Yujin Oh, Joon Beom Seo, Sang Min Lee, Jin Hwan Kim, Sungjun Moon, Jae-Kwang Lim, Jong Chul Ye
Under the global COVID-19 crisis, developing a robust diagnosis algorithm for COVID-19 using CXR is hampered by the lack of a well-curated COVID-19 dataset, although CXR data for other diseases are abundant. This situation is well suited to the vision transformer architecture, which can exploit the abundant unlabeled data using pre-training. However, directly using an existing vision transformer with a corpus generated by a ResNet is not optimal for correct feature embedding. To mitigate this problem, we propose a novel vision transformer that uses a low-level CXR feature corpus obtained by extracting abnormal CXR features. Specifically, the backbone network is trained on large public datasets to capture abnormal features encountered in routine diagnosis, such as consolidation and ground-glass opacity (GGO). The embedded features from the backbone network are then used as the corpus for vision transformer training. We examine our model on various external test datasets acquired from entirely different institutions to assess its generalization ability. Our experiments demonstrate that our method achieves state-of-the-art performance and better generalization capability, both of which are crucial for widespread deployment.
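Below is a hedged PyTorch sketch of the two-step idea: a stand-in CNN backbone produces the low-level feature "corpus" that a transformer encoder then classifies. Sizes and layer choices are arbitrary and are not the paper's architecture.

```python
# Illustrative sketch, not the paper's model: backbone feature maps become the token
# corpus for a Vision-Transformer-style classifier of chest X-rays.
import torch
import torch.nn as nn

class FeatureCorpusViT(nn.Module):
    def __init__(self, n_classes=2, dim=256):
        super().__init__()
        # Stand-in backbone: in the paper this is pre-trained to extract abnormal CXR features.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                          # x: (B, 1, H, W) chest X-ray
        feats = self.backbone(x)                   # (B, dim, h, w) low-level feature maps
        tokens = feats.flatten(2).transpose(1, 2)  # (B, h*w, dim): the feature "corpus"
        cls = self.cls_token.expand(x.size(0), -1, -1)
        out = self.transformer(torch.cat([cls, tokens], dim=1))
        return self.head(out[:, 0])                # classify from the CLS token

logits = FeatureCorpusViT()(torch.randn(2, 1, 224, 224))
print(logits.shape)                                # (2, 2)
```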
Vision Transformer for COVID-19 CXR Diagnosis using Chest X-ray Feature Corpus
— AK (@ak92501) March 15, 2021
pdf: https://t.co/PO6626eHdV
abs: https://t.co/vPAb4PcaLi pic.twitter.com/cTPd135yMP
9. Searching by Generating: Flexible and Efficient One-Shot NAS with Architecture Generator
Sian-Yao Huang, Wei-Ta Chu
In one-shot NAS, sub-networks need to be searched from the supernet to meet different hardware constraints. However, the search cost is high, and a separate search is needed for each constraint. In this work, we propose a novel search strategy called an architecture generator that searches sub-networks by generating them, making the search process much more efficient and flexible. With the trained architecture generator, given target hardware constraints as input, good architectures can be generated for those constraints in just one forward pass, without re-searching or retraining the supernet. Moreover, we propose a novel single-path supernet, called the unified supernet, to further improve search efficiency and reduce the GPU memory consumption of the architecture generator. With the architecture generator and the unified supernet, we propose a flexible and efficient one-shot NAS framework, called Searching by Generating NAS (SGNAS). With the pre-trained supernet, the search time of SGNAS for different hardware constraints is only 5 GPU hours, which is several times faster than previous SOTA single-path methods. After training from scratch, the top-1 accuracy of SGNAS on ImageNet is 77.1%, which is comparable with the SOTAs. The code is available at: https://github.com/eric8607242/SGNAS.
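The interface of the architecture generator can be sketched very simply: a hardware budget goes in, and one operation choice per supernet layer comes out in a single forward pass. The toy Python below uses random weights purely to show that interface; it is not the SGNAS code.

```python
# Toy sketch of an "architecture generator": a small network maps a hardware
# constraint (e.g. a FLOPs budget) to a choice of candidate op per supernet layer,
# so no per-constraint re-search is needed. Weights here are random stand-ins for
# a generator trained jointly with the unified supernet.
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, N_OPS = 20, 6                       # supernet depth and candidate ops per layer

W1 = rng.standard_normal((1, 64))
W2 = rng.standard_normal((64, N_LAYERS * N_OPS))

def generate_architecture(flops_budget_gflops: float):
    h = np.tanh(np.array([[flops_budget_gflops]]) @ W1)
    logits = (h @ W2).reshape(N_LAYERS, N_OPS)
    return logits.argmax(axis=1)              # one candidate op index per layer

# Different constraints -> different architectures from one trained generator.
print(generate_architecture(0.3))
print(generate_architecture(0.6))
```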
10. Preregistering NLP Research
Emiel van Miltenburg, Chris van der Lee, Emiel Krahmer
Preregistration refers to the practice of specifying what you are going to do, and what you expect to find in your study, before carrying out the study. This practice is increasingly common in medicine and psychology, but is rarely discussed in NLP. This paper discusses preregistration in more detail, explores how NLP researchers could preregister their work, and presents several preregistration questions for different kinds of studies. Finally, we argue in favour of registered reports, which could provide firmer grounds for slow science in NLP research. The goal of this paper is to elicit a discussion in the NLP community, which we hope to synthesise into a general NLP preregistration form in future research.
Our paper on preregistering #NLProc research is now on ArXiv: https://t.co/dZ75GjbXa7
— Emiel van Miltenburg (@evanmiltenburg) March 15, 2021
I usually don't share work in progress, because then multiple versions of the paper will be floating around, but for this paper I'd really appreciate feedback before it appears at NAACL.