1. I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image
Gyeongsik Moon, Kyoung Mu Lee
Most of the previous image-based 3D human pose and mesh estimation methods estimate parameters of the human mesh model from an input image. However, directly regressing the parameters from the input image is a highly non-linear mapping because it breaks the spatial relationship between pixels in the input image. In addition, it cannot model the prediction uncertainty, which can make training harder. To resolve the above issues, we propose I2L-MeshNet, an image-to-lixel (line+pixel) prediction network. The proposed I2L-MeshNet predicts the per-lixel likelihood on 1D heatmaps for each mesh vertex coordinate instead of directly regressing the parameters. Our lixel-based 1D heatmap preserves the spatial relationship in the input image and models the prediction uncertainty. We demonstrate the benefit of the image-to-lixel prediction and show that the proposed I2L-MeshNet outperforms previous methods. The code is publicly available at https://github.com/mks0601/I2L-MeshNet_RELEASE.
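To make the lixel idea concrete, here is a minimal sketch (ours, not the released code) of how a continuous vertex coordinate can be read out from a per-lixel 1D heatmap with a soft-argmax; the shapes and the 64-lixel resolution are illustrative assumptions.

```python
# Soft-argmax over a per-lixel 1D heatmap instead of direct coordinate regression.
import torch

def soft_argmax_1d(heatmap_logits):
    """heatmap_logits: (batch, num_vertices, num_lixels) raw scores for one axis."""
    probs = torch.softmax(heatmap_logits, dim=-1)                 # per-lixel likelihood
    positions = torch.arange(heatmap_logits.shape[-1],
                             dtype=probs.dtype, device=probs.device)
    return (probs * positions).sum(dim=-1)                        # expected lixel index

# hypothetical shapes: 8 images, 6890 mesh vertices, 64 lixels along one axis
logits_x = torch.randn(8, 6890, 64)
coords_x = soft_argmax_1d(logits_x)   # (8, 6890) continuous x-coordinates
```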
I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image
— AK (@ak92501) August 11, 2020
pdf: https://t.co/9WSyvFAbiX
abs: https://t.co/mOYhmhf1z3
github: https://t.co/9b50dl2nbj pic.twitter.com/MXsF99CcU1
2. Neural Light Transport for Relighting and View Synthesis
Xiuming Zhang, Sean Fanello, Yun-Ta Tsai, Tiancheng Sun, Tianfan Xue, Rohit Pandey, Sergio Orts-Escolano, Philip Davidson, Christoph Rhemann, Paul Debevec, Jonathan T. Barron, Ravi Ramamoorthi, William T. Freeman
The light transport (LT) of a scene describes how it appears under different lighting and viewing directions, and complete knowledge of a scene’s LT enables the synthesis of novel views under arbitrary lighting. In this paper, we focus on image-based LT acquisition, primarily for human bodies within a light stage setup. We propose a semi-parametric approach to learn a neural representation of LT that is embedded in the space of a texture atlas of known geometric properties, and model all non-diffuse and global LT as residuals added to a physically-accurate diffuse base rendering. In particular, we show how to fuse previously seen observations of illuminants and views to synthesize a new image of the same scene under a desired lighting condition from a chosen viewpoint. This strategy allows the network to learn complex material effects (such as subsurface scattering) and global illumination, while guaranteeing the physical correctness of the diffuse LT (such as hard shadows). With this learned LT, one can relight the scene photorealistically with a directional light or an HDRI map, synthesize novel views with view-dependent effects, or do both simultaneously, all in a unified framework using a set of sparse, previously seen observations. Qualitative and quantitative experiments demonstrate that our neural LT (NLT) outperforms state-of-the-art solutions for relighting and view synthesis, without separate treatment for both problems that prior work requires.
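The residual formulation is the part that is easiest to pin down in code. Below is a toy sketch of that idea only, with module names of our own choosing (it is not the NLT architecture): the network predicts a correction that is added to a physically-accurate diffuse base rendering.

```python
# Toy residual head: learned non-diffuse/global effects on top of a diffuse base.
import torch
import torch.nn as nn

class ResidualRelighter(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # small convolutional head operating in texture-atlas space (illustrative)
        self.residual_net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, diffuse_base, features):
        # the diffuse base already carries physically correct effects such as hard
        # shadows; the network only explains what the diffuse model misses
        return diffuse_base + self.residual_net(features)
```

Because the diffuse base is computed analytically, the learned part only needs to account for non-diffuse and global effects such as subsurface scattering.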
Neural Light Transport for Relighting and View Synthesis
— AK (@ak92501) August 11, 2020
pdf: https://t.co/uhqAVm3J7N
abs: https://t.co/vkpIekQ4gF
project page: https://t.co/mXvcUHw6Ue pic.twitter.com/FmmMcM7mDA
3. Robust Bayesian inference of network structure from unreliable data
Jean-Gabriel Young, George T. Cantwell, M. E. J. Newman
- retweets: 63, favorites: 209 (08/12/2020 10:10:58)
- links: abs | pdf
- cs.SI | physics.soc-ph | stat.AP
Most empirical studies of complex networks do not return direct, error-free measurements of network structure. Instead, they typically rely on indirect measurements that are often error-prone and unreliable. A fundamental problem in empirical network science is how to make the best possible estimates of network structure given such unreliable data. In this paper we describe a fully Bayesian method for reconstructing networks from observational data in any format, even when the data contain substantial measurement error and when the nature and magnitude of that error are unknown. The method is introduced through pedagogical case studies using real-world example networks, and is specifically tailored to allow straightforward, computationally efficient implementation with a minimum of technical input. Computer code implementing the method is publicly available.
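The paper's method is fully Bayesian; as a rough illustration of the underlying idea of inferring edge posteriors from repeated, unreliable measurements, here is a small EM-style sketch in the spirit of earlier work on noisy network reconstruction. The variable names and the simple parameterization (a single true-positive rate and false-positive rate) are our simplifying assumptions, not the paper's model.

```python
# EM-style estimate of edge posteriors from repeated noisy measurements.
import numpy as np

def edge_posteriors(E, N, iters=100):
    """E[i, j]: times an edge was observed between i and j; N: measurements per pair."""
    n = E.shape[0]
    iu = np.triu_indices(n, k=1)
    e = E[iu].astype(float)
    rho, alpha, beta = 0.5, 0.8, 0.2          # network density, TP rate, FP rate (init)
    for _ in range(iters):
        # E-step: posterior probability that each pair is a true edge
        like_edge = rho * alpha**e * (1 - alpha)**(N - e)
        like_none = (1 - rho) * beta**e * (1 - beta)**(N - e)
        Q = like_edge / (like_edge + like_none)
        # M-step: re-estimate parameters from the expected complete-data counts
        rho = Q.mean()
        alpha = (Q * e).sum() / (N * Q.sum())
        beta = ((1 - Q) * e).sum() / (N * (1 - Q).sum())
    return Q, (rho, alpha, beta)
```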
Introducing "Robust Bayesian inference of network structure from unreliable data," a (hopefully!) pedagogical introduction to inferring networks from noisy data -- with code.
— Jean-Gabriel Young (@_jgyou) August 11, 2020
w/ George T. Cantwell and MEJ Newman
📃Preprint: https://t.co/ne5jezRxuB pic.twitter.com/tqXufIbHwk
4. EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy
Jonas Rauber, Matthias Bethge, Wieland Brendel
EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy. Library developers no longer need to choose between supporting just one of these frameworks or reimplementing the library for each framework and dealing with code duplication. Users of such libraries can more easily switch frameworks without being locked in by a specific 3rd party library. Beyond multi-framework support, EagerPy also brings comprehensive type annotations and consistent support for method chaining to any framework. The latest documentation is available online at https://eagerpy.jonasrauber.de and the code can be found on GitHub at https://github.com/jonasrauber/eagerpy.
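A usage sketch along the lines of the EagerPy documentation: wrap the incoming tensor with ep.astensor, compute with the framework-agnostic API, and unwrap with .raw, so the same function accepts PyTorch, TensorFlow, JAX, or NumPy inputs.

```python
# Framework-agnostic L2 norm written once with EagerPy.
import eagerpy as ep

def l2_norm(x):
    x = ep.astensor(x)               # wrap the native tensor
    result = x.square().sum().sqrt()
    return result.raw                # unwrap back to the caller's framework

# example with NumPy:
import numpy as np
print(l2_norm(np.array([3.0, 4.0])))   # 5.0
```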
"EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy": https://t.co/7981r7nN5H Looks like a neat tool when collaborating with people who prefer a different Deep Learning lib, and you want to find a common denominator.
— Sebastian Raschka (@rasbt) August 11, 2020
5. Improving the Speed and Quality of GAN by Adversarial Training
Jiachen Zhong, Xuanqing Liu, Cho-Jui Hsieh
Generative adversarial networks (GAN) have shown remarkable results in image generation tasks. High-fidelity class-conditional GAN methods often rely on stabilization techniques that constrain the global Lipschitz continuity. Such regularization leads to less expressive models and slower convergence; other techniques, such as large-batch training, require unconventional computing power and are not widely accessible. In this paper, we develop an efficient algorithm, namely FastGAN (Free AdverSarial Training), to improve the speed and quality of GAN training based on the adversarial training technique. We benchmark our method on CIFAR10, a subset of ImageNet, and the full ImageNet datasets. We choose strong baselines such as SNGAN and SAGAN; the results demonstrate that our training algorithm can achieve better generation quality (in terms of the Inception score and Fréchet Inception distance) with less overall training time. Most notably, our training algorithm brings ImageNet training within reach of the broader public by requiring only 2-4 GPUs.
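As a rough illustration only (the actual FastGAN algorithm uses a "free" adversarial training scheme that reuses gradients and differs in detail), the sketch below perturbs the discriminator's real inputs with one FGSM-style step before the hinge loss, which conveys the general flavor of replacing Lipschitz regularization with adversarial training.

```python
# Simplified discriminator step with an adversarial perturbation of the real batch.
import torch
import torch.nn.functional as F

def d_step_with_adv(D, real, fake, eps=2.0 / 255):   # eps is an arbitrary illustrative budget
    delta = torch.zeros_like(real, requires_grad=True)
    loss = F.relu(1.0 - D(real + delta)).mean()       # hinge loss on (perturbed) reals
    grad, = torch.autograd.grad(loss, delta)
    adv_real = (real + eps * grad.sign()).detach()    # one ascent step against D
    d_loss = (F.relu(1.0 - D(adv_real)).mean()
              + F.relu(1.0 + D(fake.detach())).mean())
    return d_loss
```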
Improving the Speed and Quality of GAN by Adversarial Training
— AK (@ak92501) August 11, 2020
pdf: https://t.co/j9Qlg8Mv3H
abs: https://t.co/3ySOiL5beQ pic.twitter.com/cQ8wtKkzGD
6. Deep Sketch-guided Cartoon Video Synthesis
Xiaoyu Li, Bo Zhang, Jing Liao, Pedro V. Sander
We propose a novel framework to produce cartoon videos by fetching the color information from two input keyframes while following the animated motion guided by a user sketch. The key idea of the proposed approach is to estimate the dense cross-domain correspondence between the sketch and cartoon video frames, followed by a blending module with occlusion estimation that synthesizes the middle frame guided by the sketch. After that, the inputs and the synthetic frame equipped with established correspondence are fed into an arbitrary-time frame interpolation pipeline to generate and refine additional inbetween frames. Finally, a video post-processing approach is used to further improve the result. Compared to common frame interpolation methods, our approach can address frames with relatively large motion and also has the flexibility to enable users to control the generated video sequences by editing the sketch guidance. By explicitly considering the correspondence between frames and the sketch, our method achieves higher-quality synthetic results than image synthesis methods. Our results show that our system generalizes well to different movie frames, achieving better results than existing solutions.
Deep Sketch-guided Cartoon Video Synthesis
— AK (@ak92501) August 11, 2020
pdf: https://t.co/KrSbL0zsiN
abs: https://t.co/gQfHqfMCVY pic.twitter.com/VoXGslKIZe
7. The Chess Transformer: Mastering Play using Generative Language Models
David Noever, Matt Ciolino, Josh Kalin
This work demonstrates that natural language transformers can support more generic strategic modeling, particularly for text-archived games. In addition to learning natural language skills, the abstract transformer architecture can generate meaningful moves on a chessboard. With further fine-tuning, the transformer learns complex gameplay by training on 2.8 million chess games in Portable Game Notation. After 30,000 training steps, OpenAI’s Generative Pre-trained Transformer (GPT-2) optimizes weights for 774 million parameters. This fine-tuned Chess Transformer generates plausible strategies and displays game formations identifiable as classic openings, such as English or the Slav Exchange. Finally, in live play, the novel model demonstrates a human-to-transformer interface that correctly filters illegal moves and provides a novel method to challenge the transformer’s chess strategies. We anticipate future work will build on this transformer’s promise, particularly in other strategy games where features can capture the underlying complex rule syntax from simple but expressive player annotations.
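A hedged sketch of the kind of interface described above, combining a GPT-2 language model with python-chess so that only legal moves are accepted; the base "gpt2" checkpoint here is a stand-in, since the paper fine-tunes the 774M-parameter GPT-2 on 2.8 million PGN games and that checkpoint is not ours to name.

```python
# Sample move candidates from GPT-2 and keep the first one that is legal.
import chess
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # stand-in for the fine-tuned model
model = GPT2LMHeadModel.from_pretrained("gpt2")

def next_move(board, pgn_so_far, n_candidates=10):
    inputs = tokenizer(pgn_so_far, return_tensors="pt")
    outputs = model.generate(**inputs, do_sample=True, num_return_sequences=n_candidates,
                             max_new_tokens=8, pad_token_id=tokenizer.eos_token_id)
    for seq in outputs:
        text = tokenizer.decode(seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
        candidate = text.strip().split()[0] if text.strip() else ""
        try:
            return board.parse_san(candidate)        # raises ValueError for illegal moves
        except ValueError:
            continue
    return None                                      # no legal candidate found

board = chess.Board()
for san in ["e4", "e5"]:
    board.push_san(san)
move = next_move(board, "1. e4 e5 2.")
if move is not None:
    board.push(move)
```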
The Chess Transformer: Mastering Play using Generative Language Models
— Shawn Presser (@theshawwn) August 11, 2020
pdf: https://t.co/1jCbVcApCg pic.twitter.com/hQqwFRfDVz
The Chess Transformer: Mastering Play using Generative Language Models
— AK (@ak92501) August 11, 2020
pdf: https://t.co/s7wsWenqGo
abs: https://t.co/C7FnlunhQH pic.twitter.com/80zd8Pyj58
8. Spatiotemporal Contrastive Video Representation Learning
Rui Qian, Tianjian Meng, Boqing Gong, Ming-Hsuan Yang, Huisheng Wang, Serge Belongie, Yin Cui
We present a self-supervised Contrastive Video Representation Learning (CVRL) method to learn spatiotemporal visual representations from unlabeled videos. Inspired by the recently proposed self-supervised contrastive learning framework, our representations are learned using a contrastive loss, where two clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away. We study what makes for good data augmentation for video self-supervised learning and find both spatial and temporal information are crucial. In particular, we propose a simple yet effective temporally consistent spatial augmentation method to impose strong spatial augmentations on each frame of a video clip while maintaining the temporal consistency across frames. For Kinetics-600 action recognition, a linear classifier trained on representations learned by CVRL achieves 64.1% top-1 accuracy with a 3D-ResNet50 backbone, outperforming ImageNet supervised pre-training by 9.4% and SimCLR unsupervised pre-training by 16.1% using the same inflated 3D-ResNet50. The performance of CVRL can be further improved to 68.2% with a larger 3D-ResNet50 (4x) backbone, significantly closing the gap between unsupervised and supervised video representation learning.
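A minimal sketch (ours, not the CVRL code) of temporally consistent spatial augmentation: crop and flip parameters are sampled once per clip and then applied identically to every frame, so the spatial augmentation is strong while temporal consistency across frames is preserved.

```python
# Sample augmentation parameters once per clip and reuse them for all frames.
import torch
import torchvision.transforms.functional as TF
from torchvision.transforms import RandomResizedCrop

def augment_clip(clip, out_size=224):
    """clip: (T, C, H, W) frames from one video clip."""
    i, j, h, w = RandomResizedCrop.get_params(clip[0], scale=(0.3, 1.0), ratio=(0.75, 1.33))
    flip = torch.rand(1).item() < 0.5
    frames = []
    for frame in clip:                      # same crop and flip for every frame
        frame = TF.resized_crop(frame, i, j, h, w, [out_size, out_size])
        if flip:
            frame = TF.hflip(frame)
        frames.append(frame)
    return torch.stack(frames)
```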
Our new work: Spatiotemporal Contrastive Video Representation Learning (CVRL).
— Yin Cui (@YinCui1) August 11, 2020
On Kinetics-600, we achieve 64.1% top-1 linear classification accuracy with a 3D-ResNet50 backbone and 68.2% with a larger 3D-ResNet50 (4x) backbone.
Link: https://t.co/96avHJPFFB pic.twitter.com/vHJi6tDN1W
9. TriFinger: An Open-Source Robot for Learning Dexterity
Manuel Wüthrich, Felix Widmaier, Felix Grimminger, Joel Akpo, Shruti Joshi, Vaibhav Agrawal, Bilal Hammoud, Majid Khadiv, Miroslav Bogdanovic, Vincent Berenz, Julian Viereck, Maximilien Naveau, Ludovic Righetti, Bernhard Schölkopf, Stefan Bauer
Dexterous object manipulation remains an open problem in robotics, despite the rapid progress in machine learning during the past decade. We argue that a hindrance is the high cost of experimentation on real systems, in terms of both time and money. We address this problem by proposing an open-source robotic platform which can safely operate without human supervision. The hardware is inexpensive (about $5000) yet highly dynamic, robust, and capable of complex interaction with external objects. The software operates at 1 kHz and performs safety checks to prevent the hardware from breaking. The easy-to-use front-end (in C++ and Python) is suitable for real-time control as well as deep reinforcement learning. In addition, the software framework is largely robot-agnostic and can hence be used independently of the hardware proposed herein. Finally, we illustrate the potential of the proposed platform through a number of experiments, including real-time optimal control, deep reinforcement learning from scratch, throwing, and writing.
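For flavor, here is a generic 1 kHz control-loop skeleton with a command clamp as the safety check; every name and value in it is hypothetical and unrelated to the actual TriFinger front-end, which is documented in the project's own repositories.

```python
# Generic fixed-rate control loop with a torque clamp (illustrative only).
import time

def control_loop(read_state, send_torque, policy, torque_limit=0.3, hz=1000):
    """read_state, send_torque, and policy are user-supplied callables (hypothetical interface)."""
    period = 1.0 / hz
    while True:
        t0 = time.monotonic()
        state = read_state()                                    # joint positions, velocities, ...
        command = policy(state)
        # safety check: clamp commands so the hardware cannot be damaged
        command = [max(-torque_limit, min(torque_limit, u)) for u in command]
        send_torque(command)
        time.sleep(max(0.0, period - (time.monotonic() - t0)))  # hold the 1 kHz rate
```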
TriFinger: An Open-Source Robot for Learning Dexterity https://t.co/PXLkKkbU9e pic.twitter.com/yjAJxl3wzq
— sim2real (@sim2realAIorg) August 11, 2020
10. VAW-GAN for Singing Voice Conversion with Non-parallel Training Data
Junchen Lu, Kun Zhou, Berrak Sisman, Haizhou Li
Singing voice conversion aims to convert a singer's voice from source to target without changing the singing content. Parallel training data is typically required to train a singing voice conversion system, which is, however, not practical in real-life applications. Recent encoder-decoder structures, such as the variational autoencoding Wasserstein generative adversarial network (VAW-GAN), provide an effective way to learn a mapping from non-parallel training data. In this paper, we propose a singing voice conversion framework that is based on VAW-GAN. We train an encoder to disentangle singer identity and singing prosody (F0 contour) from phonetic content. By conditioning on singer identity and F0, the decoder generates output spectral features with unseen target singer identity, and improves the F0 rendering. Experimental results show that the proposed framework achieves better performance than the baseline frameworks.
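A toy sketch (not the authors' implementation) of the conditioning described above: the decoder consumes the phonetic-content code together with a singer-identity embedding and the F0 contour; all dimensions below are made-up placeholders.

```python
# Decoder conditioned on singer identity and F0, per frame.
import torch
import torch.nn as nn

class ConditionedDecoder(nn.Module):
    def __init__(self, content_dim=64, n_singers=10, id_dim=16, spec_dim=80):
        super().__init__()
        self.singer_emb = nn.Embedding(n_singers, id_dim)
        self.net = nn.Sequential(
            nn.Linear(content_dim + id_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, spec_dim),
        )

    def forward(self, content, singer_id, f0):
        # content: (B, T, content_dim), f0: (B, T, 1), singer_id: (B,)
        sid = self.singer_emb(singer_id).unsqueeze(1).expand(-1, content.size(1), -1)
        return self.net(torch.cat([content, sid, f0], dim=-1))  # per-frame spectral features
```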
Wasserstein Generative Adversarial Networks for Singing Voice Conversion
— AK (@ak92501) August 11, 2020
pdf: https://t.co/P8rqMSJ1HS
abs: https://t.co/uVdYTaKIkD
project page: https://t.co/coMbVkzj4L
github: https://t.co/UU6slqY9sy pic.twitter.com/sEUPOJIxVj
11. Two-branch Recurrent Network for Isolating Deepfakes in Videos
Iacopo Masi, Aditya Killekar, Royston Marian Mascarenhas, Shenoy Pratik Gurudatt, Wael AbdAlmageed
The current spike of hyper-realistic faces artificially generated using deepfakes calls for media forensics solutions that are tailored to video streams and work reliably with a low false alarm rate at the video level. We present a method for deepfake detection based on a two-branch network structure that isolates digitally manipulated faces by learning to amplify artifacts while suppressing the high-level face content. Unlike current methods that extract spatial frequencies as a preprocessing step, we propose a two-branch structure: one branch propagates the original information, while the other branch suppresses the face content yet amplifies multi-band frequencies using a Laplacian of Gaussian (LoG) as a bottleneck layer. To better isolate manipulated faces, we derive a novel cost function that, unlike regular classification, compresses the variability of natural faces and pushes away the unrealistic facial samples in the feature space. Our two novel components show promising results on the FaceForensics++, Celeb-DF, and Facebook’s DFDC preview benchmarks, when compared to prior work. We then offer a full, detailed ablation study of our network architecture and cost function. Finally, although it remains difficult to reach very strong figures at a very low false alarm rate, our study shows that we can achieve good video-level AUC when cross-testing.
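As a small, self-contained illustration of the Laplacian of Gaussian (LoG) bottleneck idea (our sketch, not the paper's layer), the filter below is a fixed, zero-mean band-pass kernel applied depthwise, which suppresses smooth face content while passing mid/high-frequency artifacts.

```python
# Fixed depthwise Laplacian-of-Gaussian filtering of a feature map or image batch.
import torch
import torch.nn.functional as F

def log_kernel(size=9, sigma=1.4):
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    xx, yy = torch.meshgrid(ax, ax, indexing="ij")
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * torch.exp(-r2 / (2 * sigma**2))
    return k - k.mean()                                  # zero mean: flat regions map to zero

def log_filter(x, size=9, sigma=1.4):
    """x: (B, C, H, W); depthwise convolution with the fixed LoG kernel."""
    k = log_kernel(size, sigma).repeat(x.shape[1], 1, 1, 1)   # (C, 1, size, size)
    return F.conv2d(x, k.to(x), padding=size // 2, groups=x.shape[1])
```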
Two-branch Recurrent Network for Isolating Deepfakes in Videos. #DataScience #BigData #IoT #Python #RStats #TensorFlow #Java #JavaScript #ReactJS #GoLang #Serverless #Linux #AI #Programming #DeepLearning #MachineLearning #Deepfakes #ArtificialIntelligence https://t.co/KOuq6oWcBT pic.twitter.com/0qkoqDYP6Y
— Marcus Borba (@marcusborba) August 11, 2020