1. Causal Discovery in Physical Systems from Videos
Yunzhu Li, Antonio Torralba, Animashree Anandkumar, Dieter Fox, Animesh Garg
Causal discovery is at the core of human cognition. It enables us to reason about the environment and make counterfactual predictions about unseen scenarios that can vastly differ from our previous experiences. We consider the task of causal discovery from videos in an end-to-end fashion without supervision on the ground-truth graph structure. In particular, our goal is to discover the structural dependencies among environmental and object variables: inferring the type and strength of interactions that have a causal effect on the behavior of the dynamical system. Our model consists of (a) a perception module that extracts a semantically meaningful and temporally consistent keypoint representation from images, (b) an inference module for determining the graph distribution induced by the detected keypoints, and (c) a dynamics module that can predict the future by conditioning on the inferred graph. We assume access to different configurations and environmental conditions, i.e., data from unknown interventions on the underlying system; thus, we can hope to discover the correct underlying causal graph without explicit interventions. We evaluate our method in a planar multi-body interaction environment and scenarios involving fabrics of different shapes like shirts and pants. Experiments demonstrate that our model can correctly identify the interactions from a short sequence of images and make long-term future predictions. The causal structure assumed by the model also allows it to make counterfactual predictions and extrapolate to systems of unseen interaction graphs or graphs of various sizes.
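The abstract's three-module pipeline lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of how the inference and dynamics modules could fit together: pairwise keypoint features are turned into a differentiable sample of a discrete interaction graph via a Gumbel-softmax relaxation, and a message-passing network predicts the next keypoint positions conditioned on that graph. All module names, layer sizes, and the two-edge-type setup are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeInference(nn.Module):
    """Scores a categorical edge-type distribution for every keypoint pair."""
    def __init__(self, kp_dim=2, hidden=64, n_edge_types=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * kp_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_edge_types))

    def forward(self, kps):                       # kps: (B, N, kp_dim)
        B, N, D = kps.shape
        src = kps.unsqueeze(2).expand(B, N, N, D)
        dst = kps.unsqueeze(1).expand(B, N, N, D)
        logits = self.mlp(torch.cat([src, dst], dim=-1))
        # Differentiable sample of a discrete graph (Gumbel-softmax relaxation).
        return F.gumbel_softmax(logits, tau=0.5, hard=True)

class GraphDynamics(nn.Module):
    """Predicts the next keypoint positions conditioned on the sampled graph."""
    def __init__(self, kp_dim=2, hidden=64, n_edge_types=2):
        super().__init__()
        self.edge_mlps = nn.ModuleList(
            nn.Linear(2 * kp_dim, hidden) for _ in range(n_edge_types))
        self.node_mlp = nn.Sequential(
            nn.Linear(kp_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, kp_dim))

    def forward(self, kps, edges):                # edges: (B, N, N, n_edge_types)
        B, N, D = kps.shape
        src = kps.unsqueeze(2).expand(B, N, N, D)
        dst = kps.unsqueeze(1).expand(B, N, N, D)
        pair = torch.cat([src, dst], dim=-1)
        # One message function per edge type, gated by the sampled graph.
        msg = sum(edges[..., k:k + 1] * F.relu(mlp(pair))
                  for k, mlp in enumerate(self.edge_mlps))
        agg = msg.sum(dim=2)                      # aggregate incoming messages
        return kps + self.node_mlp(torch.cat([kps, agg], dim=-1))  # delta update

kps = torch.randn(8, 5, 2)                        # 8 clips, 5 keypoints each
edges = EdgeInference()(kps)
next_kps = GraphDynamics()(kps, edges)
```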
Excited to share our work on "Causal Discovery in Physical Systems from Videos" from my internship at @NVIDIAAI
— Yunzhu Li (@YunzhuLiYZ) July 2, 2020
Paper https://t.co/9MY2lSKlCS
Website https://t.co/dFXqbhhPOZ
Thanks to my amazing collaborators! @animesh_garg, @AnimaAnandkumar, Dieter Fox, Antonio Torralba
1/7 pic.twitter.com/Gc28f2mz6h
Learning Causal Graphs that capture Physical Systems has high potential, yet is challenging!
— Animesh Garg (@animesh_garg) July 2, 2020
Check out End-to-End Causal Discovery from videos
Site: https://t.co/YriS5oXZXm
Paper: https://t.co/QoC7njUpVa
w/ @YunzhuLiYZ @AnimaAnandkumar, A. Torralba, D. Fox pic.twitter.com/n0mIJCOVZU
Causal Discovery in Physical Systems from Videos
— roadrunner01 (@ak92501) July 2, 2020
pdf: https://t.co/8oKB4DsSKq
abs: https://t.co/yXnOELnD7s
project page: https://t.co/dKkBKDaiFN pic.twitter.com/SpzZKZlXbr
2. Similarity Search for Efficient Active Learning and Search of Rare Concepts
Cody Coleman, Edward Chou, Sean Culatana, Peter Bailis, Alexander C. Berg, Roshan Sumbaly, Matei Zaharia, I. Zeki Yalniz
Many active learning and search approaches are intractable for industrial settings with billions of unlabeled examples. Existing approaches, such as uncertainty sampling or information density, search globally for the optimal examples to label, scaling linearly or even quadratically with the unlabeled data. However, in practice, data is often heavily skewed; only a small fraction of collected data will be relevant for a given learning task. For example, when identifying rare classes, detecting malicious content, or debugging model performance, the ratio of positive to negative examples can be 1 to 1,000 or more. In this work, we exploit this skew in large training datasets to reduce the number of unlabeled examples considered in each selection round by only looking at the nearest neighbors to the labeled examples. Empirically, we observe that learned representations effectively cluster unseen concepts, making active learning very effective and substantially reducing the number of viable unlabeled examples. We evaluate several active learning and search techniques in this setting on three large-scale datasets: ImageNet, Goodreads spoiler detection, and OpenImages. For rare classes, active learning methods need as little as 0.31% of the labeled data to match the average precision of full supervision. By limiting active learning methods to only consider the immediate neighbors of the labeled data as candidates for labeling, we need only process as little as 1% of the unlabeled data while achieving similar reductions in labeling costs as the traditional global approach. This process of expanding the candidate pool with the nearest neighbors of the labeled set can be done efficiently and reduces the computational complexity of selection by orders of magnitude.
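A toy sketch of the SEALS selection loop the abstract describes, under stated assumptions: synthetic embeddings, a logistic-regression learner, uncertainty sampling as the selection rule, and exact k-NN standing in for the approximate similarity search an industrial deployment would use. The point is the restriction: each round scores only the neighbors of the labeled set, never the full unlabeled pool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 32)).astype(np.float32)  # stand-in embeddings
y = (X[:, 0] > 2.0).astype(int)                       # rare positive concept (~2%)

# Seed set: a few known positives plus random negatives, as in rare-class search.
pos = np.where(y == 1)[0][:5]
neg = rng.choice(np.where(y == 0)[0], size=15, replace=False)
labeled = list(pos) + list(neg)

nn_index = NearestNeighbors(n_neighbors=10).fit(X)

for _ in range(5):
    # SEALS restriction: candidates are the nearest neighbors of the
    # labeled set, not the entire unlabeled dataset.
    _, nbrs = nn_index.kneighbors(X[labeled])
    candidates = np.setdiff1d(nbrs.ravel(), labeled)

    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[candidates])[:, 1]
    # Uncertainty sampling within the restricted candidate pool.
    batch = candidates[np.argsort(np.abs(probs - 0.5))[:10]]
    labeled += batch.tolist()                     # simulate labeling the batch
```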
Can active learning scale to millions (potentially billions) of examples? Yes! We propose Similarity search for Efficient Active Learning and Search (SEALS) to restrict the candidates considered in each round and vastly reduce the computational complexity: https://t.co/wnCHMzXege
— Cody Coleman (@codyaustun) July 2, 2020
3. Debiased Contrastive Learning
Ching-Yao Chuang, Joshua Robinson, Lin Yen-Chen, Antonio Torralba, Stefanie Jegelka
A prominent technique for self-supervised representation learning has been to contrast semantically similar and dissimilar pairs of samples. Without access to labels, dissimilar (negative) points are typically taken to be randomly sampled datapoints, implicitly accepting that these points may, in reality, actually have the same label. Perhaps unsurprisingly, we observe that, in a synthetic setting where labels are available, sampling negative examples from truly different labels improves performance. Motivated by this observation, we develop a debiased contrastive objective that corrects for the sampling of same-label datapoints, even without knowledge of the true labels. Empirically, the proposed objective consistently outperforms the state of the art for representation learning on vision, language, and reinforcement learning benchmarks. Theoretically, we establish generalization bounds for the downstream classification task.
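For readers who want the correction concretely, here is a hedged sketch of the debiased objective following the estimator form in the paper: the negative term is adjusted using an assumed class prior tau_plus so that same-label "negatives" are discounted, and the corrected estimate is clamped at its theoretical minimum e^{-1/t}. The single-positive setup and the hyperparameter values are simplifications.

```python
import math
import torch
import torch.nn.functional as F

def debiased_contrastive_loss(z, z_pos, z_neg, tau_plus=0.1, t=0.5):
    """z, z_pos: (B, D) anchor/positive embeddings; z_neg: (B, N, D) negatives."""
    pos = torch.exp((z * z_pos).sum(-1) / t)                    # (B,)
    neg = torch.exp(torch.einsum('bd,bnd->bn', z, z_neg) / t)   # (B, N)
    N = z_neg.shape[1]
    # Debias the negative term with the class prior tau_plus; clamp at the
    # theoretical minimum e^{-1/t} so the estimate stays positive.
    g = torch.clamp((neg.mean(-1) - tau_plus * pos) / (1.0 - tau_plus),
                    min=math.exp(-1.0 / t))
    return -torch.log(pos / (pos + N * g)).mean()

z = F.normalize(torch.randn(8, 128), dim=-1)
z_pos = F.normalize(torch.randn(8, 128), dim=-1)
z_neg = F.normalize(torch.randn(8, 32, 128), dim=-1)
print(debiased_contrastive_loss(z, z_pos, z_neg))
```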
Excited to share our latest preprint on contrastive representation learning!
— Ching-Yao Chuang (@ChingYaoChuang) July 2, 2020
Debiased Contrastive Learning
paper: https://t.co/yDh0v64Pp8
code: https://t.co/Ng7s7Q05xq
w/ Joshua Robinson, @yen_chen_lin, Antonio Torralba, & Stefanie Jegelka pic.twitter.com/wxS3hOTWNX
4. Emergence of polarized ideological opinions in multidimensional topic spaces
Fabian Baumann, Philipp Lorenz-Spreen, Igor M. Sokolov, Michele Starnini
Opinion polarization is on the rise, causing concerns for the openness of public debates. Additionally, extreme opinions on different topics often show significant correlations. The dynamics leading to these polarized ideological opinions pose a challenge: How can such correlations emerge, without assuming them a priori in the individual preferences or in a preexisting social structure? Here we propose a simple model that reproduces the ideological opinion states found in survey data, even between rather unrelated, but sufficiently controversial, topics. Inspired by skew coordinate systems recently proposed in natural language processing models, we solidify these intuitions in a formalism where opinions evolve in a multidimensional space in which topics form a non-orthogonal basis. The model features a phase transition between consensus, opinion polarization, and ideological states, which we analytically characterize as a function of the controversialness and overlap of the topics. Our findings shed light on the mechanisms driving the emergence of ideology in the formation of opinions.
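A rough numerical sketch of our reading of the abstract (not the authors' exact equations): opinions on two topics evolve under social influence passed through a non-orthogonal topic basis with overlap cos(delta). Here alpha plays the role of controversialness and K of social coupling; the static random contact network and all parameter values are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, dt = 200, 2000, 0.01
alpha, K, delta = 3.0, 0.2, np.pi / 4           # topic overlap angle
Phi = np.array([[1.0, np.cos(delta)],            # non-orthogonal topic basis
                [np.cos(delta), 1.0]])
A = (rng.random((n, n)) < 0.05).astype(float)    # static random contact network
np.fill_diagonal(A, 0.0)
x = rng.normal(scale=0.1, size=(n, 2))           # initial opinions on 2 topics

for _ in range(T):                               # explicit Euler integration
    influence = A @ np.tanh(alpha * x @ Phi.T)   # influence through skewed basis
    x += dt * (-x + K * influence)

# Strongly correlated extreme opinions across the two topics would indicate
# an ideological state; opinions near zero would indicate consensus.
print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])
```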
Last out! @electionstudies survey data shows that extreme opinions wrt different topics can be correlated. We propose a model where these polarized ideological opinions emerge, without assuming a priori such correlations or preexisting social structures 1/3 https://t.co/vqxupKYhif pic.twitter.com/SRkh0TNfoS
— Michele Starnini (@m_starnini) July 2, 2020
5. Deep Geometric Texture Synthesis
Amir Hertz, Rana Hanocka, Raja Giryes, Daniel Cohen-Or
Recently, deep generative adversarial networks for image generation have advanced rapidly; yet, only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remain a fundamental topic in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which are used to subdivide and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in the normal and tangential directions), enabling synthesis of geometric textures that cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches enables a genus-oblivious framework, facilitating texture transfer between shapes of different genus.
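The key operation the abstract highlights, displacing vertices in a full local frame rather than only along the normal as a 2D displacement map would, is easy to make concrete. A small sketch under stated assumptions: random stand-in vertices, normals, and offsets, where in the actual method the offsets would come from the learned network.

```python
import numpy as np

def local_frames(normals):
    """Build an orthonormal (tangent, bitangent, normal) frame per vertex."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    # Pick a helper axis that is never parallel to the normal.
    helper = np.where(np.abs(n[:, :1]) < 0.9, [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
    t = np.cross(n, helper)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    b = np.cross(n, t)
    return t, b, n

def displace(vertices, normals, offsets):
    """offsets: (V, 3) coefficients in the (tangent, bitangent, normal) frame."""
    t, b, n = local_frames(normals)
    return (vertices + offsets[:, :1] * t
            + offsets[:, 1:2] * b + offsets[:, 2:] * n)

V = np.random.rand(100, 3)                # stand-in vertex positions
N = np.random.randn(100, 3)               # stand-in vertex normals
new_V = displace(V, N, 0.01 * np.random.randn(100, 3))
```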
Deep Geometric Texture Synthesis
— roadrunner01 (@ak92501) July 2, 2020
pdf: https://t.co/oZyaDzxlu3
abs: https://t.co/w2aIlSg93G pic.twitter.com/6TmEFy6unN
6. Swapping Autoencoder for Deep Image Manipulation
Taesung Park, Jun-Yan Zhu, Oliver Wang, Jingwan Lu, Eli Shechtman, Alexei A. Efros, Richard Zhang
Deep generative models have become increasingly effective at producing realistic images from randomly sampled seeds, but using such models for controllable manipulation of existing images remains challenging. We propose the Swapping Autoencoder, a deep model designed specifically for image manipulation, rather than random sampling. The key idea is to encode an image with two independent components and enforce that any swapped combination maps to a realistic image. In particular, we encourage the components to represent structure and texture, by enforcing one component to encode co-occurrent patch statistics across different parts of an image. As our method is trained with an encoder, finding the latent codes for a new input image becomes trivial, rather than cumbersome. As a result, it can be used to manipulate real input images in various ways, including texture swapping, local and global editing, and latent code vector arithmetic. Experiments on multiple datasets show that our model produces better results and is substantially more efficient than recent generative models.
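A schematic sketch of the swap at the heart of the method, with placeholder encoders and generator rather than the actual architecture: one image is encoded into a spatial structure code, the other into a global texture code, and the generator renders the cross combination, which training would push to look realistic.

```python
import torch
import torch.nn as nn

class SwappingAutoencoder(nn.Module):
    def __init__(self, ch=32, tex_dim=256):
        super().__init__()
        self.structure_enc = nn.Sequential(            # spatially organized code
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.texture_enc = nn.Sequential(              # single global code
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, tex_dim))
        self.gen = nn.Sequential(
            nn.Conv2d(ch + tex_dim, ch, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),
            nn.Conv2d(ch, 3, 3, padding=1))

    def decode(self, structure, texture):
        # Broadcast the global texture code over the structure code's grid.
        tex = texture[:, :, None, None].expand(-1, -1, *structure.shape[2:])
        return self.gen(torch.cat([structure, tex], dim=1))

    def forward(self, img_a, img_b):
        s_a, t_b = self.structure_enc(img_a), self.texture_enc(img_b)
        return self.decode(s_a, t_b)                   # structure of A, texture of B

model = SwappingAutoencoder()
hybrid = model(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```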
Swapping Autoencoder for Deep Image Manipulation
— roadrunner01 (@ak92501) July 2, 2020
pdf: https://t.co/ymWzqolF99
abs: https://t.co/8nBm22jOOS
project page: https://t.co/jhfFwb9VyH
video: https://t.co/278pF01UUI pic.twitter.com/AVqTheajr9
7. Adaptive Procedural Task Generation for Hard-Exploration Problems
Kuan Fang, Yuke Zhu, Silvio Savarese, Li Fei-Fei
We introduce Adaptive Procedural Task Generation (APT-Gen), an approach for progressively generating a sequence of tasks as curricula to facilitate reinforcement learning in hard-exploration problems. At the heart of our approach, a task generator learns to create tasks via a black-box procedural generation module by adaptively sampling from the parameterized task space. To enable curriculum learning in the absence of a direct indicator of learning progress, the task generator is trained by balancing the agent's expected return in the generated tasks and their similarities to the target task. Through adversarial training, the similarity between the generated tasks and the target task is adaptively estimated by a task discriminator defined on the agent's behaviors. In this way, our approach can efficiently generate tasks of rich variations for target tasks whose parameterization is unknown or which are not covered by the predefined task space. Experiments demonstrate the effectiveness of our approach through quantitative and qualitative analysis in various scenarios.
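A highly simplified sketch of the curriculum objective the abstract describes: the task generator is scored by a weighted mix of the agent's return on the generated tasks and a discriminator's judgment that behavior in those tasks resembles behavior in the target task. The networks, the fake return signal, the rollout stand-in, and the weight beta are all placeholder assumptions, and the discriminator's own adversarial update is omitted for brevity.

```python
import torch
import torch.nn as nn

task_gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))
discrim = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(task_gen.parameters(), lr=1e-3)
beta = 0.5                                        # return-vs-similarity trade-off

def rollout_features(task_params):
    # Stand-in for running the agent in the generated task and summarizing
    # its behavior; a real system would collect trajectories here.
    return torch.tanh(task_params @ torch.randn(4, 6))

for step in range(100):
    z = torch.randn(16, 8)
    tasks = task_gen(z)                           # parameters of generated tasks
    behavior = rollout_features(tasks)
    expected_return = -tasks.pow(2).sum(-1).mean()         # fake return signal
    # Discriminator's (log-)probability that the behavior matches the target task.
    similarity = torch.sigmoid(discrim(behavior)).log().mean()
    loss = -(beta * expected_return + (1 - beta) * similarity)
    opt.zero_grad()
    loss.backward()
    opt.step()
```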
We introduce APT-Gen to procedurally generate tasks of rich variations as curricula for reinforcement learning in hard-exploration problems.
— Kuan Fang (@KuanFang) July 2, 2020
Webpage: https://t.co/hRvlVHXStR
Paper: https://t.co/24MWtkVxtL
w/ @yukez @silviocinguetta @drfeifei pic.twitter.com/vNGDsF87ex
8. RE-MIMO: Recurrent and Permutation Equivariant Neural MIMO Detection
Kumar Pratik, Bhaskar D. Rao, Max Welling
In this paper, we present a novel neural network for MIMO symbol detection. It is motivated by several important considerations in wireless communication systems: permutation equivariance and a variable number of users. The neural detector learns an iterative decoding algorithm that is implemented as a stack of iterative units. Each iterative unit is a neural computation module comprising three sub-modules: the likelihood module, the encoder module, and the predictor module. The likelihood module injects information about the generative (forward) process into the neural network. The encoder and predictor modules together update the state vector and symbol estimates. The encoder module updates the state vector and employs a transformer-based attention network to handle the interactions among the users in a permutation-equivariant manner. The predictor module refines the symbol estimates. The modular and permutation-equivariant architecture allows for dealing with a varying number of users. The resulting neural detector architecture is unique and exhibits several desirable properties unseen in previously proposed neural detectors. We compare its performance against existing methods, and the results show the ability of our network to efficiently handle a variable number of transmitters with high accuracy.
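A structural sketch of one iterative unit under stated assumptions: the dimensions, modules, and the gradient-style likelihood term are our guesses at a plausible instantiation, not the paper's exact design. A likelihood term injects the forward model y = Hx + n, a transformer encoder mixes per-user states permutation-equivariantly (so any number of users K works), and a predictor refines the symbol estimates.

```python
import torch
import torch.nn as nn

class IterativeUnit(nn.Module):
    def __init__(self, d=64):
        super().__init__()
        self.embed = nn.Linear(2, d)               # (current estimate, likelihood grad)
        self.encoder = nn.TransformerEncoderLayer(d_model=d, nhead=4,
                                                  batch_first=True)
        self.predictor = nn.Linear(d, 1)

    def forward(self, x_hat, H, y):
        # Likelihood module: gradient of ||y - Hx||^2 injects the forward model.
        residual = y - torch.einsum('bmk,bk->bm', H, x_hat)
        grad = torch.einsum('bmk,bm->bk', H, residual)
        state = self.embed(torch.stack([x_hat, grad], dim=-1))  # (B, K, d)
        state = self.encoder(state)                # attention over users
        return x_hat + self.predictor(state).squeeze(-1)        # refine estimate

B, M, K = 4, 16, 8                                 # batch, antennas, users
H, x = torch.randn(B, M, K), torch.randn(B, K)
y = torch.einsum('bmk,bk->bm', H, x) + 0.1 * torch.randn(B, M)
x_hat = torch.zeros(B, K)
for unit in [IterativeUnit() for _ in range(3)]:   # stack of iterative units
    x_hat = unit(x_hat, H, y)
```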
This was a nice project I did with Pratik Kumar (MSc student UvA [!]) and Bhaskar Rao (UCSD). Pratik combined transformers and Recurrent Inference Machines to do inference in massive MIMO systems in a user-permutation equivariant model. Great work Pratik! https://t.co/FUGjuh48Xm
— Max Welling (@wellingmax) July 2, 2020