1. On Random Matrices Arising in Deep Neural Networks. Gaussian Case
Leonid Pastur
The paper deals with the distribution of singular values of products of random matrices arising in the analysis of deep neural networks. The matrices resemble product analogs of sample covariance matrices; however, an important difference is that the population covariance matrices, which are assumed to be non-random in the standard setting of statistics and random matrix theory, are now random and, moreover, are certain functions of the random data matrices. The problem was considered in recent work [21] using the techniques of free probability theory. Since, however, free probability theory deals with population matrices that are independent of the data matrices, its applicability in this case requires additional justification. We present this justification by using a version of the standard techniques of random matrix theory, under the assumption that the entries of the data matrices are independent Gaussian random variables. In the subsequent paper [18] we extend our results to the case where the entries of the data matrices are merely independent identically distributed random variables with several finite moments. This, in particular, extends the property of so-called macroscopic universality to the random matrices under consideration.
TIL that Leonid Pastur (of Marchenko-Pastur fame) not only still actively publishes (at 83), but even wrote a substantial paper, quite recently, of interest to people in deep learning https://t.co/0YuvJGecKD pic.twitter.com/OiB5ufc896
— Shubhendu Trivedi (@_onionesque) November 23, 2020
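For rough intuition about the objects the abstract refers to, here is a toy numeric experiment; it is a sketch under simplifying assumptions, not the paper's model. It samples a product of independent Gaussian matrices and looks at the squared singular values, whose empirical distribution in the single-factor case converges to the Marchenko-Pastur law.

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, trials = 500, 3, 20   # matrix size, number of factors, repetitions

sq_svals = []
for _ in range(trials):
    # product W = W_L ... W_1 of independent Gaussian matrices, entries ~ N(0, 1/n)
    W = np.eye(n)
    for _ in range(L):
        W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n)) @ W
    s = np.linalg.svd(W, compute_uv=False)
    sq_svals.extend(s ** 2)

# with this scaling, E[tr(W W^T)] / n = 1 for every L; for L = 1 the empirical
# distribution of s^2 approaches the Marchenko-Pastur law supported on [0, 4]
print("mean squared singular value:", np.mean(sq_svals))
print("largest squared singular value:", np.max(sq_svals))
```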
2. Neural Scene Graphs for Dynamic Scenes
Julian Ost, Fahim Mannan, Nils Thuerey, Julian Knodt, Felix Heide
Recent implicit neural rendering methods have demonstrated that it is possible to learn accurate view synthesis for complex scenes by predicting their volumetric density and color, supervised solely by a set of RGB images. However, existing methods are restricted to learning efficient interpolations of static scenes, encoding all scene objects into a single neural network, and lack the ability to represent dynamic scenes or decompositions into individual scene objects. In this work, we present the first neural rendering method that decomposes dynamic scenes into scene graphs. We propose a learned scene graph representation, which encodes object transformations and radiance, to efficiently render novel arrangements and views of the scene. To this end, we learn implicitly encoded scenes, combined with a jointly learned latent representation, to describe objects with a single implicit function. We assess the proposed method on synthetic and real automotive data, validating that our approach learns dynamic scenes, only by observing a video of the scene, and allows for rendering novel photo-realistic views of novel scene compositions with unseen sets of objects at unseen poses.
Neural Scene Graphs for Dynamic Scenes
— AK (@ak92501) November 23, 2020
pdf: https://t.co/0vtzByWQNP
abs: https://t.co/HhLSqp8kKj pic.twitter.com/qGMMMBcB5I
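A minimal sketch of the representational idea, with all names, shapes, and the tiny stand-in network being assumptions (the paper's actual per-object functions are NeRF-style MLPs): each leaf node of the scene graph carries a rigid transform and a per-object latent code, and one shared implicit function, conditioned on the latent, is queried in each object's local frame and composited.

```python
import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(0)

# stand-in weights for a tiny shared "implicit function" f(x_local, z)
W1, W2, W3 = rng.normal(size=(3, 16)), rng.normal(size=(8, 16)), rng.normal(size=(16, 4))

def implicit_fn(x_local, z):
    """Toy conditioned implicit function: (local point, latent) -> (density, rgb)."""
    h = np.tanh(x_local @ W1 + z @ W2)
    out = h @ W3
    sigma = np.log1p(np.exp(out[0]))          # softplus keeps density non-negative
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))      # sigmoid keeps colour in [0, 1]
    return sigma, rgb

@dataclass
class SceneNode:
    R: np.ndarray       # 3x3 rotation, world frame -> object frame
    t: np.ndarray       # object origin in world coordinates
    z: np.ndarray       # per-object latent code

def query_scene(x_world, nodes):
    """Composite density/colour at a world-space point over all graph leaves."""
    sigma_total, rgb_acc = 0.0, np.zeros(3)
    for node in nodes:
        x_local = node.R @ (x_world - node.t)  # move query point into object frame
        sigma, rgb = implicit_fn(x_local, node.z)
        sigma_total += sigma                   # densities add across objects
        rgb_acc += sigma * rgb                 # density-weighted colour
    return sigma_total, rgb_acc / max(sigma_total, 1e-8)

nodes = [SceneNode(np.eye(3), np.array([1.0, 0.0, 0.0]), rng.normal(size=8)),
         SceneNode(np.eye(3), np.array([-1.0, 0.0, 0.0]), rng.normal(size=8))]
print(query_scene(np.zeros(3), nodes))
```

In this view, rendering a novel scene composition amounts to editing the graph, moving transforms or swapping latents, without retraining the shared implicit function.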
3. Dual Contradistinctive Generative Autoencoder
Gaurav Parmar, Dacheng Li, Kwonjoon Lee, Zhuowen Tu
We present a new generative autoencoder model with dual contradistinctive losses to improve generative autoencoders that perform simultaneous inference (reconstruction) and synthesis (sampling). Our model, named dual contradistinctive generative autoencoder (DC-VAE), integrates an instance-level discriminative loss (maintaining instance-level fidelity for reconstruction/synthesis) with a set-level adversarial loss (encouraging set-level fidelity for reconstruction/synthesis), both being contradistinctive. Extensive experimental results for DC-VAE across resolutions of 32x32, 64x64, 128x128, and 512x512 are reported. The two contradistinctive losses work harmoniously in DC-VAE, leading to a significant qualitative and quantitative performance enhancement over the baseline VAEs without architectural changes. State-of-the-art or competitive results among generative autoencoders are observed for image reconstruction, image synthesis, image interpolation, and representation learning. DC-VAE is a general-purpose VAE model, applicable to a wide variety of downstream tasks in computer vision and machine learning.
Dual Contradistinctive Generative Autoencoder
— AK (@ak92501) November 23, 2020
pdf: https://t.co/1tE9nmWI8V
abs: https://t.co/X9tvPyXqkP pic.twitter.com/V7IBXXKdGJ
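A hedged sketch of how two such losses could sit on top of a standard VAE objective; the function name, loss weights, and the assumed feature extractor and discriminator are illustrative choices, not the authors' implementation. The instance-level term is written InfoNCE-style (each reconstruction must match the features of its own input against the rest of the batch); the set-level term is a non-saturating GAN generator loss.

```python
import torch
import torch.nn.functional as F

def dc_vae_losses(x, x_rec, mu, logvar, feat_real, feat_rec, d_fake,
                  tau=0.1, lam_inst=1.0, lam_adv=1.0):
    # standard VAE terms: reconstruction + KL to the unit Gaussian prior
    rec = F.mse_loss(x_rec, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

    # instance-level contradistinctive term (InfoNCE-style): positives are
    # (input, its own reconstruction) pairs, negatives are the rest of the batch
    f_r = F.normalize(feat_real, dim=1)
    f_g = F.normalize(feat_rec, dim=1)
    logits = f_g @ f_r.t() / tau                      # (B, B) similarity matrix
    labels = torch.arange(x.size(0), device=x.device)
    inst = F.cross_entropy(logits, labels)

    # set-level contradistinctive term: fool a discriminator d(.) that scores
    # real images against reconstructed/sampled ones
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))

    return rec + kl + lam_inst * inst + lam_adv * adv
```

In a full training loop this objective would update the encoder/decoder, while the discriminator and the feature extractor are trained with their own opposing objectives, as in standard adversarial setups.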
4. On barren plateaus and cost function locality in variational quantum algorithms
Alexey Uvarov, Jacob Biamonte
- retweets: 20, favorites: 42 (11/24/2020 11:51:05)
- quant-ph | cond-mat.dis-nn | cs.LG
Variational quantum algorithms rely on gradient-based optimization to iteratively minimize a cost function evaluated by measuring the output(s) of a quantum processor. A barren plateau is the phenomenon of exponentially vanishing gradients in sufficiently expressive parametrized quantum circuits. It has been established that the onset of a barren plateau regime depends on the cost function, although the particular behavior has been demonstrated only for certain classes of cost functions. Here we derive a lower bound on the variance of the gradient, which depends mainly on the width of the circuit causal cone of each term in the Pauli decomposition of the cost function. Our result further clarifies the conditions under which barren plateaus can occur.
On barren plateaus and cost function locality in variational quantum algorithms https://t.co/mRp099RrLB
— Jacob D Biamonte (@JacobBiamonte) November 23, 2020
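To illustrate the barren plateau phenomenon itself (the generic exponential vanishing of gradients in deep random circuits, not the paper's specific lower bound), here is a small numeric experiment under assumed simplifications: a single RX rotation is sandwiched between Haar-random layers, and the variance of the gradient of a local cost is estimated via the parameter-shift rule.

```python
import numpy as np
from scipy.stats import unitary_group

def rx(theta):
    # RX(theta) = exp(-i * theta * X / 2)
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.array([[c, s], [s, c]])

for n in range(2, 8):
    d = 2 ** n
    H = np.kron(np.diag([1.0, -1.0]), np.eye(d // 2))   # observable: Z on qubit 0
    psi0 = np.zeros(d, dtype=complex); psi0[0] = 1.0    # |0...0>
    grads = []
    for _ in range(50):
        V1, V2 = unitary_group.rvs(d), unitary_group.rvs(d)  # Haar-random layers

        def cost(theta):
            U = V2 @ np.kron(rx(theta), np.eye(d // 2)) @ V1
            psi = U @ psi0
            return np.real(np.vdot(psi, H @ psi))

        # parameter-shift rule: dC/dtheta at 0 = (C(pi/2) - C(-pi/2)) / 2
        grads.append((cost(np.pi / 2) - cost(-np.pi / 2)) / 2)
    print(n, "qubits: Var[dC/dtheta] ~", np.var(grads))
```

The printed variance shrinks rapidly with qubit count, which is the barren plateau effect; the paper's contribution is a lower bound on this variance governed by the width of the circuit causal cone of each Pauli term in the cost function.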