1. Robust Consistent Video Depth Estimation
Johannes Kopf, Xuejian Rong, Jia-Bin Huang
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video. We integrate a learning-based depth prior, in the form of a convolutional neural network trained for single-image depth estimation, with geometric optimization to estimate a smooth camera trajectory as well as detailed and stable depth reconstruction. Our algorithm combines two complementary techniques: (1) flexible deformation splines for low-frequency, large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details. In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations. Our method quantitatively outperforms the state of the art on the Sintel benchmark for both depth and pose estimation and attains favorable qualitative results across diverse in-the-wild datasets.
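As a rough illustration of the low-frequency alignment idea, here is a minimal sketch (not the authors' implementation) of applying a coarse, bilinearly interpolated deformation grid to a CNN depth map; the depth maps, grid resolution, and optimization target below are placeholders, and the real objective compares reprojections across frames rather than a single target map.

```python
import torch
import torch.nn.functional as F

# Toy stand-in: a single-image CNN depth prediction (H x W) for one frame.
H, W = 192, 320
cnn_depth = torch.rand(1, 1, H, W) + 0.5           # placeholder depth map

# Low-frequency "deformation spline": a coarse grid of per-cell log-scale
# corrections, bilinearly upsampled to full resolution. These grid values
# would be optimized jointly with camera poses against geometric residuals.
grid_h, grid_w = 4, 6
log_scale_grid = torch.zeros(1, 1, grid_h, grid_w, requires_grad=True)

def deform_depth(depth, log_scale_grid):
    """Apply a smooth, spatially varying scale correction to a depth map."""
    scale = torch.exp(F.interpolate(log_scale_grid, size=depth.shape[-2:],
                                    mode="bilinear", align_corners=True))
    return depth * scale

# Placeholder geometric loss: pull the deformed depth toward a hypothetical
# flow-derived target; the paper's objective is multi-frame and geometric.
target_depth = torch.rand(1, 1, H, W) + 0.5
opt = torch.optim.Adam([log_scale_grid], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = F.l1_loss(deform_depth(cnn_depth, log_scale_grid), target_depth)
    loss.backward()
    opt.step()
```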
Our latest work on making Consistent Video Depth more ROBUST. Works great for casual phone videos that are really difficult for previous methods.
— Johannes Kopf (@JPKopf) December 11, 2020
Another great collaboration with @jastarex and @jbhuang0604.
arXiv: https://t.co/pVuruyAIt9
Project: https://t.co/Lvi3Ls5wzm pic.twitter.com/GBbwcyuCIS
Robust Consistent Video Depth Estimation
— AK (@ak92501) December 11, 2020
pdf: https://t.co/C1uuNXkhq3
abs: https://t.co/qweQrDAL0m
project page: https://t.co/GIaXHm3aD1 pic.twitter.com/M9EPJkfyX9
2. Utilising Graph Machine Learning within Drug Discovery and Development
Thomas Gaudelet, Ben Day, Arian R. Jamasb, Jyothish Soman, Cristian Regep, Gertrude Liu, Jeremy B. R. Hayter, Richard Vickers, Charles Roberts, Jian Tang, David Roblin, Tom L. Blundell, Michael M. Bronstein, Jake P. Taylor-King
Graph Machine Learning (GML) is receiving growing interest within the pharmaceutical and biotechnology industries for its ability to model biomolecular structures, the functional relationships between them, and to integrate multi-omic datasets, amongst other data types. Herein, we present a multidisciplinary academic-industrial review of the topic within the context of drug discovery and development. After introducing key terms and modelling approaches, we move chronologically through the drug development pipeline to identify and summarise work on target identification, design of small molecules and biologics, and drug repurposing. Whilst the field is still emerging, key milestones, including repurposed drugs entering in vivo studies, suggest graph machine learning will become a modelling framework of choice within biomedical machine learning.
Delighted to share our review on Graph Machine Learning in Drug Discovery! W/ @RelationRx @thomasgaudelet @itsmebenday @jyo_soman @tangjianpku @mmbronstein @jptktech and others https://t.co/CrfL2l3pyl pic.twitter.com/kAPrW1PxWI
— Arian Jamasb (@arian_jamasb) December 11, 2020
Very pleased to announce our review: "Utilising Graph Machine Learning within Drug Discovery and Development". Fantastic effort across a group of multidisciplinary researchers, clinicians and drug developers!
— Relation Therapeutics (@RelationRx) December 11, 2020
Preprint here: https://t.co/oG4h2C5JMw pic.twitter.com/nzbyG9RGyV
3. Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts
Ben Green, Yiling Chen
Governments are increasingly turning to algorithmic risk assessments when making important decisions, believing that these algorithms will improve public servants’ ability to make policy-relevant predictions and thereby lead to more informed decisions. Yet because many policy decisions require balancing risk-minimization with competing social goals, evaluating the impacts of risk assessments requires considering how public servants are influenced by risk assessments when making policy decisions rather than just how accurately these algorithms make predictions. Through an online experiment with 2,140 lay participants simulating two high-stakes government contexts, we provide the first large-scale evidence that risk assessments can systematically alter decision-making processes by increasing the salience of risk as a factor in decisions and that these shifts could exacerbate racial disparities. These results demonstrate that improving human prediction accuracy with algorithms does not necessarily improve human decisions and highlight the need to experimentally test how government algorithms are used by human decision-makers.
Risk assessments are intended to help public servants make better predictions. But in a new preprint, Yiling Chen and I show that rather than improving human decisions, risk assessments can make risk more salient and increase racial disparities.
— Ben Green (@benzevgreen) December 11, 2020
Link: https://t.co/CLsEsXCHuq pic.twitter.com/OqVf1fYBZQ
4. iNeRF: Inverting Neural Radiance Fields for Pose Estimation
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Alberto Rodriguez, Phillip Isola, Tsung-Yi Lin
We present iNeRF, a framework that performs pose estimation by “inverting” a trained Neural Radiance Field (NeRF). NeRFs have been shown to be remarkably effective for the task of view synthesis - synthesizing photorealistic novel views of real-world scenes or objects. In this work, we investigate whether we can apply analysis-by-synthesis with NeRF for 6DoF pose estimation - given an image, find the translation and rotation of a camera relative to a 3D model. Starting from an initial pose estimate, we use gradient descent to minimize the residual between pixels rendered from an already-trained NeRF and pixels in an observed image. In our experiments, we first study 1) how to sample rays during pose refinement for iNeRF to collect informative gradients and 2) how different batch sizes of rays affect iNeRF on a synthetic dataset. We then show that for complex real-world scenes from the LLFF dataset, iNeRF can improve NeRF by estimating the camera poses of novel images and using these images as additional training data for NeRF. Finally, we show iNeRF can be combined with feature-based pose initialization. On LineMOD, the approach outperforms all other RGB-based methods that rely on synthetic data.
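To make the analysis-by-synthesis loop concrete, here is a heavily simplified, self-contained sketch: a toy differentiable "renderer" (Gaussian splats of a few known 3D points, standing in for a trained NeRF) is used to refine a 6DoF pose perturbation by gradient descent on the photometric residual. The small-angle rotation parameterization and the splat renderer are illustrative assumptions, not the paper's renderer or ray-sampling strategy.

```python
import torch

def skew(w):                      # 3-vector -> 3x3 skew-symmetric matrix
    wx, wy, wz = w
    zero = w.new_zeros(())
    return torch.stack([torch.stack([zero, -wz, wy]),
                        torch.stack([wz, zero, -wx]),
                        torch.stack([-wy, wx, zero])])

def render(rot_vec, t, points, intensities, H=32, W=32, f=40.0):
    """Toy differentiable renderer: project 3D points with a pinhole camera
    (small-angle rotation R ~ I + [w]_x) and splat Gaussian blobs."""
    R = torch.eye(3) + skew(rot_vec)
    cam = points @ R.T + t                          # world -> camera
    uv = f * cam[:, :2] / cam[:, 2:3]               # perspective projection
    uv = uv + torch.tensor([W / 2.0, H / 2.0])
    ys, xs = torch.meshgrid(torch.arange(H).float(),
                            torch.arange(W).float(), indexing="ij")
    img = torch.zeros(H, W)
    for (u, v), c in zip(uv, intensities):          # Gaussian splatting
        img = img + c * torch.exp(-((xs - u) ** 2 + (ys - v) ** 2) / 8.0)
    return img

torch.manual_seed(0)
points = torch.randn(8, 3)
points[:, 2] += 5.0                                  # keep points in front of the camera
colors = torch.rand(8)
gt_w = torch.tensor([0.02, -0.03, 0.01])
gt_t = torch.tensor([0.10, -0.05, 0.00])
observed = render(gt_w, gt_t, points, colors)        # "observed image"

# iNeRF-style refinement: start near identity, descend on the photometric loss.
w = torch.zeros(3, requires_grad=True)
t = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([w, t], lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = ((render(w, t, points, colors) - observed) ** 2).mean()
    loss.backward()
    opt.step()
```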
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Alberto Rodriguez, Phillip Isola, Tsung-Yi Lin, iNeRF: Inverting Neural Radiance Fields for Pose Estimation, arXiv, 2020
— Kosta Derpanis (@CSProfKGD) December 11, 2020
Paper: https://t.co/ic4S3WBohc
Project page: https://t.co/RUl7PO6yAv pic.twitter.com/hOQeTiayep
iNeRF: Inverting Neural Radiance Fields for Pose Estimation
— AK (@ak92501) December 11, 2020
pdf: https://t.co/ISdJjvG8FH
abs: https://t.co/8Yd4aYiICg
project page: https://t.co/Hspnds0iaf pic.twitter.com/QZ3kuyNeak
5. Neurosymbolic AI: The 3rd Wave
Artur d’Avila Garcez, Luis C. Lamb
Current advances in Artificial Intelligence (AI) and Machine Learning (ML) have achieved unprecedented impact across research communities and industry. Nevertheless, influential thinkers have raised concerns about the trust, safety, interpretability and accountability of AI. Many have identified the need for well-founded knowledge representation and reasoning to be integrated with deep learning, and for sound explainability. Neural-symbolic computing has been an active area of research for many years, seeking to bring together robust learning in neural networks with reasoning and explainability via symbolic representations for network models. In this paper, we relate recent and early research results in neurosymbolic AI with the objective of identifying the key ingredients of the next wave of AI systems. We focus on research that integrates neural network-based learning with symbolic knowledge representation and logical reasoning in a principled way. The insights provided by 20 years of neural-symbolic computing are shown to shed new light on the increasingly prominent role of trust, safety, interpretability and accountability of AI. We also identify promising directions and challenges for the next decade of AI research from the perspective of neural-symbolic systems.
Neurosymbolic AI: The 3rd Wave with @AvilaGarcez now on @arxiv https://t.co/EhmwnS4VyU; thanks to @frossi_t @GaryMarcus @pascalhitzler @geoffreyhinton @ceobillionaire @kahneman_daniel @vardi @serafiniluc @pascalhitzler @Melleo54Sis @jeublanc @kerstingAIML & many others. pic.twitter.com/FPHpvpKf5h
— Luis Lamb (@luislamb) December 11, 2020
And very much related to Neurosymbolic AI: the 3rd wave https://t.co/cL9EDznNag, I am looking forward to #NeurIPS2020 workshop on KR meets ML and debate @frossi_t @DanielKahneman @luislamb @GaryMarcus @MITCoCoSci @HenryKautz TODAY https://t.co/kJ4EZ7wVWM
— Artur d'Avila Garcez (@AvilaGarcez) December 11, 2020
6. Portrait Neural Radiance Fields from a Single Image
Chen Gao, Yichang Shih, Wei-Sheng Lai, Chia-Kai Liang, Jia-Bin Huang
We present a method for estimating Neural Radiance Fields (NeRF) from a single headshot portrait. While NeRF has demonstrated high-quality view synthesis, it requires multiple images of static scenes and is thus impractical for casual captures and moving subjects. In this work, we propose to pretrain the weights of a multilayer perceptron (MLP), which implicitly models the volumetric density and colors, with a meta-learning framework using a light stage portrait dataset. To improve the generalization to unseen faces, we train the MLP in the canonical coordinate space approximated by 3D face morphable models. We quantitatively evaluate the method using controlled captures and demonstrate the generalization to real portrait images, showing favorable results against state-of-the-art methods.
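The abstract does not spell out the meta-learning algorithm, so purely as a hedged illustration, here is a minimal Reptile-style pretraining loop for a small coordinate MLP: each light-stage subject is treated as a task, the MLP is adapted for a few steps on that task, and the shared initialization is nudged toward the adapted weights. The MLP architecture, task sampler, and loss are placeholders, not the paper's setup.

```python
import copy
import torch
import torch.nn as nn

# Toy coordinate MLP standing in for the NeRF MLP (input: 3D position, output: density + RGB).
mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                    nn.Linear(128, 128), nn.ReLU(),
                    nn.Linear(128, 4))

def sample_task_batch():
    """Placeholder for sampled rays/targets of one light-stage subject."""
    x = torch.randn(256, 3)
    y = torch.randn(256, 4)
    return x, y

inner_steps, inner_lr, outer_lr = 5, 1e-3, 0.1
for meta_iter in range(1000):
    adapted = copy.deepcopy(mlp)                       # clone current initialization
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                       # inner loop: fit one subject
        x, y = sample_task_batch()
        loss = ((adapted(x) - y) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():                              # Reptile-style outer update
        for p, q in zip(mlp.parameters(), adapted.parameters()):
            p += outer_lr * (q - p)
```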
Portrait Neural Radiance Fields from a Single Image
— AK (@ak92501) December 11, 2020
pdf: https://t.co/80TKKFosQM
abs: https://t.co/nq58439QPJ
project page: https://t.co/Y9aNIbBCH3 pic.twitter.com/3jcMXJbbjw
7. Data and its (dis)contents: A survey of dataset development and use in machine learning research
Amandalynne Paullada, Inioluwa Deborah Raji, Emily M. Bender, Emily Denton, Alex Hanna
Datasets have played a foundational role in the advancement of machine learning research. They form the basis for the models we design and deploy, as well as our primary medium for benchmarking and evaluation. Furthermore, the ways in which we collect, construct and share these datasets inform the kinds of problems the field pursues and the methods explored in algorithm development. However, recent work from a breadth of perspectives has revealed the limitations of predominant practices in dataset collection and use. In this paper, we survey the many concerns raised about the way we collect and use data in machine learning and advocate that a more cautious and thorough understanding of data is necessary to address several of the practical and ethical issues of the field.
Read the latest version of the paper here: https://t.co/YFhD4eRx4i
— Amandalynne Paullada (@amandalynneP) December 11, 2020
Nice paper by Paullada et al.: Data and its (dis)contents: A survey of dataset development and use in machine learning research.
— Emiel van Miltenburg (@evanmiltenburg) December 11, 2020
If you’re planning to build a dataset for #nlproc or #ml/#ai then you should probably read this critical survey.
URL: https://t.co/z0HFv8kB4P
8. GAN Steerability without optimization
Nurit Spingarn-Eliezer, Ron Banner, Tomer Michaeli
Recent research has shown remarkable success in revealing “steering” directions in the latent spaces of pre-trained GANs. These directions correspond to semantically meaningful image transformations (e.g., shift, zoom, color manipulations), and have similar interpretable effects across all categories that the GAN can generate. Some methods focus on user-specified transformations, while others discover transformations in an unsupervised manner. However, all existing techniques rely on an optimization procedure to expose those directions, and offer no control over the degree of allowed interaction between different transformations. In this paper, we show that “steering” trajectories can be computed in closed form directly from the generator’s weights without any form of training or optimization. This applies to user-prescribed geometric transformations, as well as to unsupervised discovery of more complex effects. Our approach allows determining both linear and nonlinear trajectories, and has many advantages over previous methods. In particular, we can control whether one transformation is allowed to come at the expense of another (e.g. zoom-in with or without allowing translation to keep the object centered). Moreover, we can determine the natural end-point of the trajectory, which corresponds to the largest extent to which a transformation can be applied without incurring degradation. Finally, we show how transferring attributes between images can be achieved without optimization, even across different categories.
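The closed-form computation of the directions from the generator's weights is the paper's contribution and is not reproduced here; the short sketch below only shows how a precomputed linear steering direction would be applied to a latent code to walk a generated image along a transformation. The generator and the direction vector are placeholders.

```python
import torch

latent_dim = 512
generator = lambda z: z                   # placeholder for a pretrained GAN generator
direction = torch.randn(latent_dim)       # placeholder direction (paper: closed form from weights)
direction = direction / direction.norm()

z = torch.randn(1, latent_dim)            # a sampled latent code
alphas = torch.linspace(-3.0, 3.0, 7)     # step sizes along the linear trajectory
frames = [generator(z + a * direction) for a in alphas]
```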
GAN Steerability without optimization
— AK (@ak92501) December 11, 2020
pdf: https://t.co/FVGhpib0fW
abs: https://t.co/Xx9FaWmVbH pic.twitter.com/pdxb4WtQGz
9. Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
Jiashun Wang, Huazhe Xu, Jingwei Xu, Sifei Liu, Xiaolong Wang
Synthesizing 3D human motion plays an important role in many graphics applications as well as in understanding human activity. While many efforts have been made on generating realistic and natural human motion, most approaches neglect the importance of modeling human-scene interactions and affordance. On the other hand, affordance reasoning (e.g., standing on the floor or sitting on a chair) has mainly been studied with static human poses and gestures, and it has rarely been addressed with human motion. In this paper, we propose to bridge human motion synthesis and scene affordance reasoning. We present a hierarchical generative framework to synthesize long-term 3D human motion conditioned on the 3D scene structure. Building on this framework, we further enforce multiple geometry constraints between the human mesh and scene point clouds via optimization to make the synthesis more realistic. Our experiments show significant improvements over previous approaches on generating natural and physically plausible human motion in a scene.
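As an illustration of the kind of geometry constraints one can enforce between a human mesh and a scene, the sketch below penalizes floor penetration and rewards foot-to-scene contact; the specific terms, weights, vertex counts, and foot vertex indices are assumptions rather than the paper's exact losses.

```python
import torch

def scene_interaction_loss(verts, scene_pts, foot_idx, floor_z=0.0):
    """verts: (V, 3) human mesh vertices; scene_pts: (S, 3) scene point cloud."""
    # Penetration: penalize vertices that sink below the floor plane (z-up assumed).
    penetration = torch.relu(floor_z - verts[:, 2]).mean()
    # Contact: encourage foot vertices to lie close to some scene surface point.
    d = torch.cdist(verts[foot_idx], scene_pts)      # (F, S) pairwise distances
    contact = d.min(dim=1).values.mean()
    return penetration + 0.1 * contact

verts = torch.randn(6890, 3)             # an SMPL-sized mesh (placeholder values)
scene = torch.randn(10000, 3)            # placeholder scene point cloud
foot_idx = torch.arange(0, 60)           # hypothetical foot vertex indices
loss = scene_interaction_loss(verts, scene, foot_idx)
```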
Synthesizing Long-Term 3D Human Motion and Interaction in 3D Scenes
— AK (@ak92501) December 11, 2020
pdf: https://t.co/QS31oVMSCw
abs: https://t.co/SB1gwWhWBy
project page: https://t.co/K0MMVYhZuN pic.twitter.com/83kJClFohD
10. Concept Generalization in Visual Representation Learning
Mert Bulent Sariyildiz, Yannis Kalantidis, Diane Larlus, Karteek Alahari
Measuring concept generalization, i.e., the extent to which models trained on a set of (seen) visual concepts can be used to recognize a new set of (unseen) concepts, is a popular way of evaluating visual representations, especially when they are learned with self-supervised learning. Nonetheless, the choice of which unseen concepts to use is usually made arbitrarily, and independently from the seen concepts used to train representations, thus ignoring any semantic relationships between the two. In this paper, we argue that semantic relationships between seen and unseen concepts affect generalization performance and propose ImageNet-CoG, a novel benchmark on the ImageNet dataset that enables measuring concept generalization in a principled way. Our benchmark leverages expert knowledge that comes from WordNet in order to define a sequence of unseen ImageNet concept sets that are semantically more and more distant from the ImageNet-1K subset, a ubiquitous training set. This allows us to benchmark visual representations learned on ImageNet-1K out of the box: we analyse a number of such models from supervised, semi-supervised and self-supervised approaches through the prism of concept generalization, and show how our benchmark is able to uncover a number of interesting insights. We will provide resources for the benchmark at https://europe.naverlabs.com/cog-benchmark.
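To show the kind of expert knowledge the benchmark leans on, here is a small sketch using NLTK's WordNet interface to order candidate (unseen) concepts by their semantic similarity to a seen concept; the concrete synsets and the similarity measure (Wu-Palmer here) are illustrative, not the benchmark's exact construction.

```python
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

seen = wn.synset("dog.n.01")
candidates = ["wolf.n.01", "cat.n.01", "guitar.n.01", "volcano.n.01"]

# Rank candidate concepts from semantically closest to most distant.
ranked = sorted(candidates,
                key=lambda s: wn.synset(s).wup_similarity(seen),
                reverse=True)
for name in ranked:
    print(name, round(wn.synset(name).wup_similarity(seen), 3))
```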
Generalization is at the heart of representation learning; yet the impact of the *semantic relationship* between concepts seen during training and downstream datasets is unclear. In our recent work (https://t.co/fckLaoI4uQ), we propose a principled way of measuring exactly that. pic.twitter.com/OhNLV63GIx
— M.Bülent Sarıyıldız (@mbsariyildiz) December 11, 2020
11. Full-Glow: Fully conditional Glow for more realistic image generation
Moein Sorkhei, Gustav Eje Henter, Hedvig Kjellström
Autonomous agents, such as driverless cars, require large amounts of labeled visual data for their training. A viable approach for acquiring such data is to train a generative model on collected real data and then augment the real dataset with synthetic images from the model, generated with control over the scene layout and ground-truth labeling. In this paper we propose Full-Glow, a fully conditional Glow-based architecture for generating plausible and realistic images of novel street scenes given a semantic segmentation map indicating the scene layout. Benchmark comparisons show our model to outperform recent works in terms of the semantic segmentation performance of a pretrained PSPNet. This indicates that images from our model are, to a higher degree than those from other models, similar to real images of the same kinds of scenes and objects, making them suitable as training data for a visual semantic segmentation or object recognition system.
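The evaluation described (segmenting generated images with a pretrained network and comparing against the conditioning layout) boils down to a confusion-matrix mean IoU; a minimal NumPy sketch follows, with the predicted and layout label maps as placeholders rather than actual PSPNet outputs.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """pred, target: integer label maps of the same shape."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (target.ravel(), pred.ravel()), 1)   # confusion matrix
    inter = np.diag(conf).astype(np.float64)
    union = conf.sum(0) + conf.sum(1) - np.diag(conf)
    valid = union > 0
    return (inter[valid] / union[valid]).mean()

# Placeholders: the semantic layout used to condition the generator, and the
# segmentation predicted by a pretrained network on the generated image.
layout = np.random.randint(0, 19, (256, 512))
pred = np.random.randint(0, 19, (256, 512))
print(mean_iou(pred, layout, num_classes=19))
```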
Full-Glow: Fully conditional Glow for more realistic image generation
— AK (@ak92501) December 11, 2020
pdf: https://t.co/0WCRMqpUhv
abs: https://t.co/WNg1DdAOXk
github: https://t.co/qw8W4FW3h3 pic.twitter.com/047ghkS3BQ
12. Enhancing Human Pose Estimation in Ancient Vase Paintings via Perceptually-grounded Style Transfer Learning
Prathmesh Madhu, Angel Villar-Corrales, Ronak Kosti, Torsten Bendschus, Corinna Reinhardt, Peter Bell, Andreas Maier, Vincent Christlein
Human pose estimation (HPE) is a central part of understanding the visual narration and body movements of characters depicted in artwork collections, such as Greek vase paintings. Unfortunately, existing HPE methods do not generalise well across domains, resulting in poorly recognized poses. Therefore, we propose a two-step approach: (1) we adapt a dataset of natural images with known person and pose annotations to the style of Greek vase paintings by means of image style transfer, introducing a perceptually-grounded style transfer training to enforce perceptual consistency, and then fine-tune the base model on this newly created dataset. We show that using style-transfer learning significantly improves the SOTA performance on unlabelled data by more than 6% mean average precision (mAP) as well as mean average recall (mAR). (2) To improve these already strong results further, we created a small dataset (ClassArch) consisting of ancient Greek vase paintings from the 6th-5th century BCE with person and pose annotations. We show that fine-tuning a style-transferred model on this data improves the performance further. In a thorough ablation study, we give a targeted analysis of the influence of style intensities, revealing that the model learns generic domain styles. Additionally, we provide a pose-based image retrieval to demonstrate the effectiveness of our method.
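The fine-tuning step could look roughly like the torchvision Keypoint R-CNN loop below; the dataset stub, keypoint count, and hyperparameters are placeholders, and the paper's actual pose model and training schedule may differ.

```python
import torch
import torchvision

# Start from a COCO-pretrained person keypoint detector.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(pretrained=True)
model.train()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def dummy_batch():
    """Placeholder for style-transferred / vase-painting images with annotations."""
    images = [torch.rand(3, 480, 640)]
    kp_x = 100 + torch.rand(1, 17, 1) * 200            # keypoints inside the box
    kp_y = 80 + torch.rand(1, 17, 1) * 340
    keypoints = torch.cat([kp_x, kp_y, torch.ones(1, 17, 1)], dim=-1)  # (x, y, vis)
    targets = [{
        "boxes": torch.tensor([[100.0, 80.0, 300.0, 420.0]]),  # one person box
        "labels": torch.tensor([1]),                            # person class
        "keypoints": keypoints,
    }]
    return images, targets

for _ in range(10):                        # fine-tuning loop (illustrative length)
    images, targets = dummy_batch()
    loss_dict = model(images, targets)     # dict of detection and keypoint losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```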
Enhancing Human Pose Estimation in Ancient Vase Paintings via Perceptually-grounded Style Transfer Learning
— AK (@ak92501) December 11, 2020
pdf: https://t.co/s05UmioJBh
abs: https://t.co/3ONYdk2SlU pic.twitter.com/9Dt2oL3lK6
13. Flexible Few-Shot Learning with Contextual Similarity
Mengye Ren, Eleni Triantafillou, Kuan-Chieh Wang, James Lucas, Jake Snell, Xaq Pitkow, Andreas S. Tolias, Richard Zemel
Existing approaches to few-shot learning deal with tasks that have persistent, rigid notions of classes. Typically, the learner observes data only from a fixed number of classes at training time and is asked to generalize to a new set of classes at test time. Two examples from the same class would always be assigned the same labels in any episode. In this work, we consider a realistic setting where the similarities between examples can change from episode to episode depending on the task context, which is not given to the learner. We define new benchmark datasets for this flexible few-shot scenario, where the tasks are based on images of faces (Celeb-A), shoes (Zappos50K), and general objects (ImageNet-with-Attributes). While classification baselines and episodic approaches learn representations that work well for standard few-shot learning, they suffer in our flexible tasks as novel similarity definitions arise during testing. We propose to build upon recent contrastive unsupervised learning techniques and use a combination of instance and class invariance learning, aiming to obtain general and flexible features. We find that our approach performs strongly on our new flexible few-shot learning benchmarks, demonstrating that unsupervised learning obtains more generalizable representations.
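For context on the episodic evaluation this setting builds on, below is a minimal prototypical-network style episode with frozen embeddings: class prototypes are mean support embeddings, and queries are classified by nearest prototype. The embedding network is a placeholder, and this is the standard few-shot baseline rather than the contrastive approach proposed in the paper.

```python
import torch

# Placeholder encoder standing in for a pretrained feature extractor.
embed = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))

def episode_accuracy(support_x, support_y, query_x, query_y, n_way):
    """support/query_x: image tensors; *_y: integer labels in [0, n_way)."""
    with torch.no_grad():
        s, q = embed(support_x), embed(query_x)
        # Prototype = mean embedding of each class's support examples.
        protos = torch.stack([s[support_y == c].mean(0) for c in range(n_way)])
        # Classify each query by its nearest prototype (Euclidean distance).
        pred = torch.cdist(q, protos).argmin(dim=1)
    return (pred == query_y).float().mean().item()

# Toy 5-way, 1-shot episode with 15 queries per class.
n_way, k_shot, n_query = 5, 1, 15
support_x = torch.rand(n_way * k_shot, 3, 32, 32)
support_y = torch.arange(n_way).repeat_interleave(k_shot)
query_x = torch.rand(n_way * n_query, 3, 32, 32)
query_y = torch.arange(n_way).repeat_interleave(n_query)
print(episode_accuracy(support_x, support_y, query_x, query_y, n_way))
```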
Check out our new paper Flexible Few-Shot Learning -- the same object can belong to different classes depending on context. We found unsupervised representation is better than supervised. A short version at NeurIPS metalearn workshop today at 10 EST. https://t.co/fHMGMfcdRb pic.twitter.com/awqJH5NyTz
— Zemel Group (@zemelgroup) December 11, 2020
14. ViP-DeepLab: Learning Visual Perception with Depth-aware Video Panoptic Segmentation
Siyuan Qiao, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
In this paper, we present ViP-DeepLab, a unified model attempting to tackle the long-standing and challenging inverse projection problem in vision, which we model as restoring the point clouds from perspective image sequences while providing each point with instance-level semantic interpretations. Solving this problem requires the vision models to predict the spatial location, semantic class, and temporally consistent instance label for each 3D point. ViP-DeepLab approaches it by jointly performing monocular depth estimation and video panoptic segmentation. We name this joint task Depth-aware Video Panoptic Segmentation, and propose a new evaluation metric along with two derived datasets for it, which will be made available to the public. On the individual sub-tasks, ViP-DeepLab also achieves state-of-the-art results, outperforming previous methods by 5.1% VPQ on Cityscapes-VPS, ranking 1st on the KITTI monocular depth estimation benchmark, and ranking 1st on the KITTI MOTS pedestrian benchmark. The datasets and the evaluation codes are made publicly available.