1. Taming Transformers for High-Resolution Image Synthesis
Patrick Esser, Robin Rombach, Björn Ommer
Designed to learn long-range interactions on sequential data, transformers continue to show state-of-the-art results on a wide variety of tasks. In contrast to CNNs, they contain no inductive bias that prioritizes local interactions. This makes them expressive, but also computationally infeasible for long sequences, such as high-resolution images. We demonstrate how combining the effectiveness of the inductive bias of CNNs with the expressivity of transformers enables them to model and thereby synthesize high-resolution images. We show how to (i) use CNNs to learn a context-rich vocabulary of image constituents, and in turn (ii) utilize transformers to efficiently model their composition within high-resolution images. Our approach is readily applied to conditional synthesis tasks, where both non-spatial information, such as object classes, and spatial information, such as segmentations, can control the generated image. In particular, we present the first results on semantically-guided synthesis of megapixel images with transformers. Project page at https://compvis.github.io/taming-transformers/ .
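For readers who want the gist of the two-stage pipeline, here is a minimal, hypothetical sketch (not the paper's VQGAN implementation): a CNN encoder compresses the image into a grid of latent vectors, each vector is snapped to its nearest entry in a learned codebook, and the resulting short index sequence is what the autoregressive transformer models. All layer sizes and the ToyVQEncoder name are placeholders.

```python
# Minimal sketch of the two-stage idea (not the paper's VQGAN implementation):
# 1) a CNN encoder maps an image to a grid of latent vectors,
# 2) each vector is replaced by its nearest codebook entry,
# 3) the flattened index sequence is what an autoregressive transformer models.
import torch
import torch.nn as nn

class ToyVQEncoder(nn.Module):
    def __init__(self, codebook_size=1024, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(            # downsamples 256x256 -> 16x16
            nn.Conv2d(3, dim, 4, stride=4), nn.ReLU(),
            nn.Conv2d(dim, dim, 4, stride=4),
        )
        self.codebook = nn.Embedding(codebook_size, dim)

    def forward(self, img):
        z = self.encoder(img)                                 # (B, dim, 16, 16)
        B, D, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, D)           # (B*H*W, D)
        dists = torch.cdist(flat, self.codebook.weight)       # distance to each code
        idx = dists.argmin(dim=1).reshape(B, H * W)           # token ids
        return idx                                            # sequence for the transformer

img = torch.randn(1, 3, 256, 256)
tokens = ToyVQEncoder()(img)
print(tokens.shape)  # torch.Size([1, 256]) -- 256 tokens instead of 65536 pixels
```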
Taming Transformers for High-Resolution Image Synthesis
— AK (@ak92501) December 18, 2020
pdf: https://t.co/fRwnXjKahS
abs: https://t.co/s9e42zZrrV
project page: https://t.co/aiA2PlSODq pic.twitter.com/emVvlP2vcg
2. Variational Quantum Algorithms
M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, Patrick J. Coles
Applications such as simulating large quantum systems or solving large-scale linear algebra problems are immensely challenging for classical computers due to their extremely high computational cost. Quantum computers promise to unlock these applications, although fault-tolerant quantum computers will likely not be available for several years. Currently available quantum devices have serious constraints, including limited qubit numbers and noise processes that limit circuit depth. Variational Quantum Algorithms (VQAs), which employ a classical optimizer to train a parametrized quantum circuit, have emerged as a leading strategy to address these constraints. VQAs have now been proposed for essentially all applications that researchers have envisioned for quantum computers, and they appear to be the best hope for obtaining quantum advantage. Nevertheless, challenges remain, including the trainability, accuracy, and efficiency of VQAs. In this review article we present an overview of the field of VQAs. Furthermore, we discuss strategies to overcome their challenges as well as the exciting prospects for using them as a means to obtain quantum advantage.
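As a toy illustration of the VQA loop, the sketch below classically simulates a one-parameter circuit (a single RY rotation on one qubit), uses the expectation value of Pauli-Z as the cost, and runs a classical optimizer whose gradient comes from the parameter-shift rule. No quantum hardware or quantum-computing library is assumed; everything here is plain numpy.

```python
# Toy, classically simulated VQA loop: minimize <Z> of RY(theta)|0>
# using the parameter-shift rule to feed a classical gradient optimizer.
import numpy as np

Z = np.diag([1.0, -1.0])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def cost(theta):
    state = ry(theta) @ np.array([1.0, 0.0])     # RY(theta)|0>
    return float(state.conj() @ Z @ state)       # expectation value of Z

theta, lr = 0.1, 0.2
for step in range(100):
    # parameter-shift rule: dC/dtheta = (C(theta + pi/2) - C(theta - pi/2)) / 2
    grad = 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
    theta -= lr * grad

print(round(cost(theta), 4))   # -> -1.0, the minimal eigenvalue of Z
```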
🔥Today on arXiv we bring you a Review on Variational Quantum Algorithms (VQAs) 🔥 https://t.co/9ojsa3Cwlt
— Marco Cerezo (@MvsCerezo) December 18, 2020
We present the framework of VQAs, their applications, challenges and potential solutions, and how they could bring quantum advantage in the near term.
We hope our review article on Variational Quantum Algorithms is a helpful resource: https://t.co/DJ7tbBKUke
— Patrick Coles (@ColesQuantum) December 19, 2020
This was a worldwide multi-institutional collaboration (see thread for details). https://t.co/3UwBW7FvXx pic.twitter.com/NlElsEmtVC
3. SceneFormer: Indoor Scene Generation with Transformers
Xinpeng Wang, Chandan Yeshwanth, Matthias Nießner
The task of indoor scene generation is to generate a sequence of objects, their locations and orientations conditioned on the shape and size of a room. Large scale indoor scene datasets allow us to extract patterns from user-designed indoor scenes and then generate new scenes based on these patterns. Existing methods rely on the 2D or 3D appearance of these scenes in addition to object positions, and make assumptions about the possible relations between objects. In contrast, we do not use any appearance information, and learn relations between objects using the self-attention mechanism of transformers. We show that this leads to faster scene generation compared to existing methods with the same or better levels of realism. We build simple and effective generative models conditioned on the room shape, and on text descriptions of the room using only the cross-attention mechanism of transformers. We carried out a user study showing that our generated scenes are preferred over DeepSynth scenes 57.7% of the time for bedroom scenes, and 63.3% for living room scenes. In addition, we generate a scene in 1.48 seconds on average, 20% faster than the state-of-the-art method Fast & Flexible, allowing interactive scene generation.
Generating synthetic scenes using Transformers. https://t.co/ciMeffp2Gq
— Ankur Handa (@ankurhandos) December 18, 2020
Given an empty room, it figures out where to place an object (x, y, z, theta) and its size (l, w, h). All in an autoregressive manner (new object placement conditioned on the objects added already). pic.twitter.com/Li57VV7e2y
SceneFormer: Indoor Scene Generation with Transformers
— AK (@ak92501) December 18, 2020
pdf: https://t.co/wQkgR5de3n
abs: https://t.co/UbmVRHsZ1U pic.twitter.com/trTUm0367s
4. Worldsheet: Wrapping the World in a 3D Sheet for View Synthesis from a Single Image
Ronghang Hu, Deepak Pathak
- retweets: 1482, favorites: 240 (12/21/2020 18:49:39)
- links: abs | pdf
- cs.CV | cs.AI | cs.GR | cs.LG | stat.ML
We present Worldsheet, a method for novel view synthesis using just a single RGB image as input. This is a challenging problem as it requires an understanding of the 3D geometry of the scene as well as texture mapping to generate both visible and occluded regions from new view-points. Our main insight is that simply shrink-wrapping a planar mesh sheet onto the input image, consistent with the learned intermediate depth, captures underlying geometry sufficient to generate photorealistic unseen views with arbitrarily large view-point changes. To operationalize this, we propose a novel differentiable texture sampler that allows our wrapped mesh sheet to be textured, which is then transformed into a target image via differentiable rendering. Our approach is category-agnostic, end-to-end trainable without any 3D supervision, and requires a single image at test time. Worldsheet consistently outperforms prior state-of-the-art methods on single-image view synthesis across several datasets. Furthermore, this simple idea captures novel views surprisingly well on a wide range of high-resolution in-the-wild images, converting them into a navigable 3D pop-up. Video results and code at https://worldsheet.github.io
Worldsheet: Wrapping the World in a 3D Sheet for View Synthesis from a Single Image
— AK (@ak92501) December 18, 2020
pdf: https://t.co/XKbL6ra3fV
abs: https://t.co/7UkNme2Dbq
project page: https://t.co/QXSWr7Zdqp pic.twitter.com/SQIs3IV8U1
5. ViNG: Learning Open-World Navigation with Visual Goals
Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine
We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform. Learning provides an appealing alternative to conventional methods for robotic navigation: instead of reasoning about environments in terms of geometry and maps, learning can enable a robot to learn about navigational affordances, understand what types of obstacles are traversable (e.g., tall grass) or not (e.g., walls), and generalize over patterns in the environment. However, unlike conventional planning algorithms, it is harder to change the goal for a learned policy during deployment. We propose a method for learning to navigate towards a goal image of the desired destination. By combining a learned policy with a topological graph constructed out of previously observed data, our system can determine how to reach this visually indicated goal even in the presence of variable appearance and lighting. Three key insights, waypoint proposal, graph pruning and negative mining, enable our method to learn to navigate in real-world environments using only offline data, a setting where prior methods struggle. We instantiate our method on a real outdoor ground robot and show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning, including other methods that incorporate reinforcement learning and search. We also study how ViNG generalizes to unseen environments and evaluate its ability to adapt to such an environment with growing experience. Finally, we demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection. We encourage the reader to check out the videos of our experiments and demonstrations at our project website https://sites.google.com/view/ving-robot
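A sketch of the topological-graph planning step appears below. `predicted_steps` is a hypothetical stand-in for the learned traversability/distance model (here it is just feature distance), the pruning threshold is invented, and shortest-path search over the resulting graph returns the next waypoint to hand to the low-level policy.

```python
# Sketch of topological-graph planning toward a visually indicated goal.
# `predicted_steps` stands in for the learned reachability model; real ViNG
# learns it from offline robot experience rather than raw feature distance.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
observations = rng.normal(size=(50, 16))       # 50 stored observation embeddings
goal_index = 42                                # node closest to the goal image

def predicted_steps(a, b):
    return float(np.linalg.norm(a - b))        # stub for the learned predictor

# Build the graph, pruning edges the model considers hard to traverse.
G = nx.Graph()
G.add_nodes_from(range(len(observations)))
for i in range(len(observations)):
    for j in range(i + 1, len(observations)):
        d = predicted_steps(observations[i], observations[j])
        if d < 5.0:                            # pruning threshold (assumed)
            G.add_edge(i, j, weight=d)

path = nx.shortest_path(G, source=0, target=goal_index, weight="weight")
next_waypoint = observations[path[1]]          # hand this to the low-level policy
print(path[:5])
```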
RL enables robots to navigate real-world environments, with diverse visually indicated goals: https://t.co/r6m5yJYrQW
— Sergey Levine (@svlevine) December 18, 2020
w/ @_prieuredesion, B. Eysenbach, G. Kahn, @nick_rhinehart
paper: https://t.co/MRKmGStx6Y
video: https://t.co/RZVVD2pku7
Thread below -> pic.twitter.com/mXD8N89bYc
6. Sparse Signal Models for Data Augmentation in Deep Learning ATR
Tushar Agarwal, Nithin Sugavanam, Emre Ertin
- retweets: 959, favorites: 11 (12/21/2020 18:49:39)
- links: abs | pdf
- cs.CV | cs.LG | eess.IV | eess.SP
Automatic Target Recognition (ATR) algorithms classify a given Synthetic Aperture Radar (SAR) image into one of the known target classes using a set of training images available for each class. Recently, learning methods have been shown to achieve state-of-the-art classification accuracy if abundant training data is available, sampled uniformly over the classes and their poses. In this paper, we consider the task of ATR with a limited set of training images. We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm, such as a convolutional neural network (CNN). The proposed data augmentation method employs a limited persistence sparse modeling approach, capitalizing on commonly observed characteristics of wide-angle synthetic aperture radar (SAR) imagery. Specifically, we exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting. Using this estimated model, we synthesize new images at poses and sub-pixel translations not available in the given data to augment the CNN’s training data. The experimental results show that in the training-data-starved regime, the proposed method provides a significant gain in the resulting ATR algorithm’s generalization performance.
Sparse Signal Models for Data Augmentation in Deep Learning ATR. #ArtificialIntelligence #MachineLearning #BigData #Analytics #Python #RStats #JavaScript #ReactJS #Serverless #Linux #IoT #Programming #100DaysOfCode #Coding #DataScience #AI #DeepLearning https://t.co/qfZpe6oY2M pic.twitter.com/45CIE8xrkM
— Marcus Borba (@marcusborba) December 18, 2020
7. Decentralized Finance, Centralized Ownership? An Iterative Mapping Process to Measure Protocol Token Distribution
Matthias Nadler, Fabian Schär
In this paper, we analyze various Decentralized Finance (DeFi) protocols in terms of their token distributions. We propose an iterative mapping process that allows us to split aggregate token holdings from custodial and escrow contracts and assign them to their economic beneficiaries. This method accounts for liquidity-, lending-, and staking-pools, as well as token wrappers, and can be used to break down token holdings, even for high nesting levels. We compute individual address balances for several snapshots and analyze intertemporal distribution changes. In addition, we study reallocation and protocol usage data, and propose a proxy for measuring token dependencies and ecosystem integration. The paper offers new insights on DeFi interoperability as well as token ownership distribution and may serve as a foundation for further research.
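A toy version of the iterative mapping idea is sketched below: holdings recorded against a pool or wrapper contract are repeatedly redistributed, pro rata, to the holders of that contract's own share token until only economic beneficiaries remain. The addresses, tokens, and balances are entirely made up.

```python
# Toy iterative mapping: balances credited to pool/wrapper contracts are
# re-assigned pro rata to the holders of those contracts, repeated until only
# economic beneficiaries (externally owned addresses) remain. Data is made up.
holders = {                       # token -> {holder: balance}
    "GOV": {"0xalice": 100.0, "0xpool": 300.0},
    "POOL_SHARE": {"0xbob": 30.0, "0xcarol": 70.0},   # claims on 0xpool
}
contracts = {"0xpool": "POOL_SHARE"}   # contract address -> its share token

def resolve(balances, max_depth=10):
    for _ in range(max_depth):          # bounded, handles nested wrappers
        nested = [a for a in balances if a in contracts]
        if not nested:
            break
        for contract in nested:
            amount = balances.pop(contract)
            shares = holders[contracts[contract]]
            total = sum(shares.values())
            for owner, share in shares.items():      # pro-rata split
                balances[owner] = balances.get(owner, 0.0) + amount * share / total
    return balances

print(resolve(dict(holders["GOV"])))
# {'0xalice': 100.0, '0xbob': 90.0, '0xcarol': 210.0}
```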
Decentralized Finance, Centralized Ownership?
— Fabian Schär (@chainomics) December 18, 2020
Read our new working paper on ownership concentration & wrapping complexity in the #DeFi space. This is joint-work w/ Matthias Nadler @mat_nadler! https://t.co/RFWz3URMTd
cc: @MAMA_global @defiprime @defipulse @DeFi_Dad @CamiRusso pic.twitter.com/Ah80BoWfbD
DeFI Unicorn Token Ownership Structure 🔥
— Julien Bouteloup (@bneiluj) December 18, 2020
Nice research paper: "Decentralized Finance, Centralized Ownership? An Iterative Mapping Process to Measure Protocol Token Distribution" https://t.co/L40zR2Sx0q https://t.co/FY3B7VaRMp pic.twitter.com/CYVMV1UjqS
8. Unsupervised Learning of Local Discriminative Representation for Medical Images
Huai Chen, Jieyu Li, Renzhen Wang, Yijie Huang, Fanrui Meng, Deyu Meng, Qing Peng, Lisheng Wang
Local discriminative representation is needed in many medical image analysis tasks such as identifying sub-types of lesions or segmenting detailed components of anatomical structures by measuring similarity of local image regions. However, the commonly applied supervised representation learning methods require a large amount of annotated data, and unsupervised discriminative representation learning distinguishes different images by learning a global feature. In order to avoid the limitations of these two methods and be suitable for localized medical image analysis tasks, we introduce local discrimination into unsupervised representation learning in this work. The model contains two branches: one is an embedding branch which learns an embedding function to disperse dissimilar pixels over a low-dimensional hypersphere; and the other is a clustering branch which learns a clustering function to classify similar pixels into the same cluster. These two branches are trained simultaneously in a mutually beneficial pattern, and the learnt local discriminative representations are able to measure the similarity of local image regions well. These representations can be transferred to enhance various downstream tasks. Meanwhile, they can also be applied to cluster anatomical structures from unlabeled medical images under the guidance of topological priors from simulation or other structures with similar topological characteristics. The effectiveness and usefulness of the proposed method are demonstrated by enhancing various downstream tasks and clustering anatomical structures in retinal images and chest X-ray images. The corresponding code is available at https://github.com/HuaiChen-1994/LDLearning.
Unsupervised Learning of Local Discriminative Representation for Medical Images. #MachineLearning #BigData #Analytics #Python #RStats #JavaScript #ReactJS #Serverless #Linux #ML #IoT #Programming #100DaysOfCode #NeuralNetworks #DataScience #AI #DeepLearning https://t.co/3Kb9gYqSxt pic.twitter.com/1fj1kTs38d
— Marcus Borba (@marcusborba) December 19, 2020
9. Projected Distribution Loss for Image Enhancement
Mauricio Delbracio, Hossein Talebi, Peyman Milanfar
Features obtained from object recognition CNNs have been widely used for measuring perceptual similarities between images. Such differentiable metrics can be used as perceptual learning losses to train image enhancement models. However, the choice of the distance function between input and target features may have a consequential impact on the performance of the trained model. While using the norm of the difference between extracted features leads to limited hallucination of details, measuring the distance between distributions of features may generate more textures; yet also more unrealistic details and artifacts. In this paper, we demonstrate that aggregating 1D-Wasserstein distances between CNN activations is more reliable than the existing approaches, and it can significantly improve the perceptual performance of enhancement models. More explicitly, we show that in imaging applications such as denoising, super-resolution, demosaicing, deblurring and JPEG artifact removal, the proposed learning loss outperforms the current state-of-the-art on reference-based perceptual losses. This means that the proposed learning loss can be plugged into different imaging frameworks and produce perceptually realistic results.
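The core computation, as described in the abstract, can be sketched as follows: per feature channel, sort the spatial activations of the output and the target and take the mean absolute difference of the sorted values, which is the 1D Wasserstein-1 distance between the two empirical distributions; the paper aggregates this over channels and over layers of a pretrained CNN. The function name and feature shapes below are my own.

```python
# Sketch of aggregating 1D Wasserstein-1 distances between CNN activations:
# per channel, sort the spatial activations of output and target and compare.
import torch

def projected_distribution_loss(feat_out, feat_tgt):
    """feat_*: (B, C, H, W) activations from some layer of a pretrained CNN."""
    B, C, H, W = feat_out.shape
    a = feat_out.reshape(B, C, H * W).sort(dim=-1).values
    b = feat_tgt.reshape(B, C, H * W).sort(dim=-1).values
    # 1D Wasserstein-1 between empirical distributions = mean |sorted difference|
    return (a - b).abs().mean()

out  = torch.randn(2, 64, 32, 32, requires_grad=True)
tgt  = torch.randn(2, 64, 32, 32)
loss = projected_distribution_loss(out, tgt)
loss.backward()      # differentiable, so it can train an enhancement model
print(float(loss))
```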
We propose a loss function based on aggregate 1D Wasserstein distance on projected feature distributions. More stable with far fewer artifacts; improves on state-of-the-art perceptual quality in denoising, super-res, demosaic, deblur & JPG artifact removal https://t.co/RokYHfjdNK pic.twitter.com/Fs5EZH0h0T
— Peyman Milanfar (@docmilanfar) December 19, 2020
10. Deep Molecular Dreaming: Inverse machine learning for de-novo molecular design and interpretability with surjective representations
Cynthia Shen, Mario Krenn, Sagi Eppel, Alan Aspuru-Guzik
- retweets: 302, favorites: 134 (12/21/2020 18:49:40)
- links: abs | pdf
- cs.LG | cs.AI | physics.chem-ph
Computer-based de-novo design of functional molecules is one of the most prominent challenges in cheminformatics today. As a result, generative and evolutionary inverse designs from the field of artificial intelligence have emerged at a rapid pace, with aims to optimize molecules for a particular chemical property. These models ‘indirectly’ explore the chemical space by learning latent spaces, policies, or distributions, or by applying mutations to populations of molecules. However, the recent development of the SELFIES string representation of molecules, a surjective alternative to SMILES, has made other potential techniques possible. Based on SELFIES, we therefore propose PASITHEA, a direct gradient-based molecule optimization that applies inceptionism techniques from computer vision. PASITHEA exploits the use of gradients by directly reversing the learning process of a neural network, which is trained to predict real-valued chemical properties. Effectively, this forms an inverse regression model, which is capable of generating molecular variants optimized for a certain property. Although our results are preliminary, we observe a shift in the distribution of a chosen property during inverse-training, a clear indication of PASITHEA’s viability. A striking property of inceptionism is that we can directly probe the model’s understanding of the chemical space it was trained on. We expect that extending PASITHEA to larger datasets, molecules and more complex properties will lead to advances in the design of new functional molecules as well as the interpretation and explanation of machine learning models.
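A minimal sketch of the "dreaming" step: freeze a trained property predictor, relax the one-hot string encoding of a molecule to continuous logits, and run gradient descent on the input toward a target property value. The predictor below is an untrained stand-in, and SELFIES decoding of the resulting tokens is omitted; all sizes are placeholders.

```python
# Minimal "molecular dreaming" sketch: gradient descent on a continuous
# relaxation of a one-hot string encoding, through a frozen property predictor.
# The predictor here is an untrained stand-in, not the trained PASITHEA model.
import torch
import torch.nn as nn

vocab_size, seq_len = 30, 20
predictor = nn.Sequential(nn.Flatten(), nn.Linear(vocab_size * seq_len, 64),
                          nn.ReLU(), nn.Linear(64, 1))
for p in predictor.parameters():
    p.requires_grad_(False)                 # the network stays fixed

logits = torch.randn(1, seq_len, vocab_size, requires_grad=True)
target = torch.tensor([[2.5]])              # desired property value
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(200):
    x = torch.softmax(logits, dim=-1)       # soft one-hot "molecule"
    loss = (predictor(x) - target).pow(2).mean()
    opt.zero_grad()
    loss.backward()                         # gradients flow into the input only
    opt.step()

tokens = logits.argmax(dim=-1)              # discretize; decode with SELFIES in practice
print(tokens.shape)                         # torch.Size([1, 20])
```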
Ever heard of DeepDreaming? It's a method to inspect the inner working of #NeuralNets & to create amazing dreamlike images.
— Mario Krenn (@MarioKrenn6240) December 18, 2020
We adapt this great idea for molecular design: https://t.co/ezAcv6Zdfc
spearheaded by @UofT undergrad Cynthia Shen, w/ S.Eppel, @A_Aspuru_Guzik #matterlab pic.twitter.com/BIC4F8faln
In this work with Cynthia Shen, @EppelSagi and @MarioKrenn6240 we develop a deep dreaming algorithm for molecular design using SELFIES and we call it PASITHEA. Learn more about it here: https://t.co/Y9YKzbOpIp #matterlab @UofT @UofTCompSci @chemuoft @VectorInst #compchem https://t.co/w65VIQ40Iz
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) December 18, 2020
11. A Generalization of Transformer Networks to Graphs
Vijay Prakash Dwivedi, Xavier Bresson
We propose a generalization of the transformer neural network architecture for arbitrary graphs. The original transformer was designed for Natural Language Processing (NLP), which operates on fully connected graphs representing all connections between the words in a sequence. Such architecture does not leverage the graph connectivity inductive bias, and can perform poorly when the graph topology is important and has not been encoded into the node features. We introduce a graph transformer with four new properties compared to the standard model. First, the attention mechanism is a function of the neighborhood connectivity for each node in the graph. Second, the positional encoding is represented by the Laplacian eigenvectors, which naturally generalize the sinusoidal positional encodings often used in NLP. Third, the layer normalization is replaced by a batch normalization layer, which provides faster training and better generalization performance. Finally, the architecture is extended to edge feature representation, which can be critical for tasks such as chemistry (bond type) or link prediction (entity relationship in knowledge graphs). Numerical experiments on a graph benchmark demonstrate the performance of the proposed graph transformer architecture. This work closes the gap between the original transformer, which was designed for the limited case of line graphs, and graph neural networks, which can work with arbitrary graphs. As our architecture is simple and generic, we believe it can be used as a black box for future applications that wish to combine transformers and graphs.
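The Laplacian positional encoding mentioned above can be sketched in a few lines: form the symmetric normalized Laplacian, take the eigenvectors of the k smallest non-trivial eigenvalues, and use each node's row as its position vector (the sign ambiguity is typically handled by random flipping during training). Function name and the toy graph are mine.

```python
# Laplacian positional encodings for a graph: eigenvectors of
# L = I - D^{-1/2} A D^{-1/2} for the k smallest non-trivial eigenvalues give
# each node a position vector (up to a sign, usually randomized in training).
import numpy as np

def laplacian_pe(adj, k=4):
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    eigvals, eigvecs = np.linalg.eigh(lap)          # ascending eigenvalues
    return eigvecs[:, 1:k + 1]                      # drop the trivial first one

# toy example: a 5-node cycle graph
A = np.zeros((5, 5))
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1.0
print(laplacian_pe(A, k=2).shape)                   # (5, 2): one PE row per node
```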
A Generalization of Transformer Networks to Graphs
— AK (@ak92501) December 18, 2020
pdf: https://t.co/NSemyGeKIG
abs: https://t.co/zjQBkDtqZ4
github: https://t.co/42kkBMvPKE pic.twitter.com/IC44D5jx7v
A Generalization of Transformer Networks to Graphs - A generalization of transformer neural network architecture for arbitrary graphs
— Philip Vollet (@philipvollet) December 18, 2020
Paper https://t.co/LbSywLsozM
GitHub https://t.co/erz4dHrDaW #nlproc #MachineLearning pic.twitter.com/gOkR0byhKq
12. Transformer Interpretability Beyond Attention Visualization
Hila Chefer, Shir Gur, Lior Wolf
Self-attention techniques, and specifically Transformers, are dominating the field of text processing and are becoming increasingly popular in computer vision classification tasks. In order to visualize the parts of the image that led to a certain classification, existing methods either rely on the obtained attention maps, or employ heuristic propagation along the attention graph. In this work, we propose a novel way to compute relevancy for Transformer networks. The method assigns local relevance based on the deep Taylor decomposition principle and then propagates these relevancy scores through the layers. This propagation involves attention layers and skip connections, which challenge existing methods. Our solution is based on a specific formulation that is shown to maintain the total relevancy across layers. We benchmark our method on very recent visual Transformer networks, as well as on a text classification problem, and demonstrate a clear advantage over the existing explainability methods.
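For context, the sketch below shows attention rollout, one of the much simpler baseline explanation methods this paper compares against and improves upon; it is not the paper's relevance-propagation approach. Rollout averages each layer's attention over heads, mixes in the identity to account for skip connections, and composes the layers by matrix multiplication.

```python
# Attention rollout -- a simpler baseline explanation method, NOT the
# deep-Taylor relevance propagation proposed in this paper.
import torch

def attention_rollout(attn_maps):
    """attn_maps: list of (heads, tokens, tokens) attention tensors, one per layer."""
    tokens = attn_maps[0].shape[-1]
    rollout = torch.eye(tokens)
    for attn in attn_maps:
        a = attn.mean(dim=0)                         # average over heads
        a = a + torch.eye(tokens)                    # account for skip connection
        a = a / a.sum(dim=-1, keepdim=True)          # re-normalize rows
        rollout = a @ rollout                        # compose layer by layer
    return rollout                                   # row 0 ~ [CLS] relevance map

layers = [torch.rand(8, 197, 197).softmax(dim=-1) for _ in range(12)]  # ViT-like shapes
print(attention_rollout(layers)[0].shape)            # torch.Size([197])
```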
Transformer Interpretability Beyond Attention Visualization
— AK (@ak92501) December 18, 2020
pdf: https://t.co/2wAO7aOmJS
abs: https://t.co/Lzcnnu8316
github: https://t.co/Ed2iSf8L2h pic.twitter.com/fVLaaNQjXc
13. Polyblur: Removing mild blur by polynomial reblurring
Mauricio Delbracio, Ignacio Garcia-Dorado, Sungjoon Choi, Damien Kelly, Peyman Milanfar
We present a highly efficient blind restoration method to remove mild blur in natural images. Contrary to the mainstream, we focus on removing slight blur that is often present, damaging image quality and commonly caused by slight defocus, lens blur, or small camera motion. The proposed algorithm first estimates image blur and then compensates for it by combining multiple applications of the estimated blur in a principled way. To estimate blur we introduce a simple yet robust algorithm based on empirical observations about the distribution of the gradient in sharp natural images. Our experiments show that, in the context of mild blur, the proposed method outperforms traditional and modern blind deblurring methods and runs in a fraction of the time. Our method can be used to blindly correct blur before applying off-the-shelf deep super-resolution methods, leading to superior results compared to other highly complex and computationally demanding techniques. The proposed method estimates and removes mild blur from a 12MP image on a modern mobile phone in a fraction of a second.
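The "compensate by re-applying the estimated blur" idea can be illustrated with a short polynomial in the blur operator: with blur k, the correction x ≈ 3y - 3(k*y) + k*(k*y) is the degree-2 truncated Neumann-series approximation of the inverse filter, which works well when the blur is mild. The specific coefficients and the known Gaussian blur below are my own illustration of the principle; Polyblur estimates the blur from the image and its exact polynomial may differ.

```python
# Illustration of "deblurring by reblurring": approximate the inverse of a mild
# blur k with a short polynomial in k (a truncated Neumann series),
#   x_hat ~= 3*y - 3*(k*y) + k*(k*y).
# Here k is a known Gaussian stand-in; Polyblur instead estimates the blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def blur(img, sigma=1.0):
    return gaussian_filter(img, sigma)

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))
blurry = blur(sharp)

ky  = blur(blurry)                      # k * y
kky = blur(ky)                          # k * k * y
deblurred = 3 * blurry - 3 * ky + kky   # polynomial "reblurring" correction

err_blurry    = np.abs(sharp - blurry).mean()
err_deblurred = np.abs(sharp - deblurred).mean()
print(err_deblurred < err_blurry)       # True: the correction reduces the error
```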
Deblurring is unstable. Undoing "mild" blur is far more stable. We 1st estimate the blur & then deblur by aggregating repeated applications of the *same* estimated blur. For mild blur, we outperform even deep methods, yet run in a fraction of the time. 1/2 https://t.co/dfbcPvYpk9 pic.twitter.com/feJw6nDQUO
— Peyman Milanfar (@docmilanfar) December 21, 2020
14. Neural Radiance Flow for 4D View Synthesis and Video Processing
Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, Jiajun Wu
We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images. Key to our approach is the use of a neural implicit representation that learns to capture the 3D occupancy, radiance, and dynamics of the scene. By enforcing consistency across different modalities, our representation enables multi-view rendering in diverse dynamic scenes, including water pouring, robotic interaction, and real images, outperforming state-of-the-art methods for spatial-temporal view synthesis. Our approach works even when input images are captured with only one camera. We further demonstrate that the learned representation can serve as an implicit scene prior, enabling video processing tasks such as image super-resolution and de-noising without any additional supervision.
Neural Radiance Flow for 4D View Synthesis and Video Processing
— AK (@ak92501) December 18, 2020
pdf: https://t.co/u3MxBKXLLz
abs: https://t.co/5VyWAZn2ol
project page: https://t.co/JIIqXAUa7H pic.twitter.com/YTvR7T7c0q
15. Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, Angjoo Kanazawa
We introduce the problem of perpetual view generation — long-range generation of novel views corresponding to an arbitrarily long camera trajectory given a single image. This is a challenging problem that goes far beyond the capabilities of current view synthesis methods, which work for a limited range of viewpoints and quickly degenerate when presented with a large camera motion. Methods designed for video generation also have limited ability to produce long video sequences and are often agnostic to scene geometry. We take a hybrid approach that integrates both geometry and image synthesis in an iterative render, refine, and repeat framework, allowing for long-range generation that covers large distances after hundreds of frames. Our approach can be trained from a set of monocular video sequences without any manual annotation. We propose a dataset of aerial footage of natural coastal scenes, and compare our method with recent view synthesis and conditional video generation baselines, showing that it can generate plausible scenes for much longer time horizons over large camera trajectories compared to existing methods. Please visit our project page at https://infinite-nature.github.io/.
Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image
— AK (@ak92501) December 18, 2020
pdf: https://t.co/3IcEn1jEyr
abs: https://t.co/MAwmKLO2XO pic.twitter.com/D2WTRsB1Zm
16. BERT Goes Shopping: Comparing Distributional Models for Product Representations
Federico Bianchi, Bingqing Yu, Jacopo Tagliabue
Word embeddings (e.g., word2vec) have been applied successfully to eCommerce products through prod2vec. Inspired by the recent performance improvements on several NLP tasks brought by contextualized embeddings, we propose to transfer BERT-like architectures to eCommerce: our model — ProdBERT — is trained to generate representations of products through masked session modeling. Through extensive experiments over multiple shops, different tasks, and a range of design choices, we systematically compare the accuracy of ProdBERT and prod2vec embeddings: while ProdBERT is found to be superior to traditional methods in several scenarios, we highlight the importance of resources and hyperparameters in the best performing models. Finally, we conclude by providing guidelines for training embeddings under a variety of computational and data constraints.
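Masked session modeling frames the training data much like BERT's masked language modeling, with product IDs playing the role of tokens: a fraction of the IDs in a browsing session are replaced by a mask ID and the model must recover them. The sketch below only prepares (input, label) pairs; the IDs, the 15% rate, and the mask value are placeholders, not details taken from the paper.

```python
# Sketch of masked session modeling data preparation: sessions of product IDs
# are masked like tokens in masked language modeling. IDs/rates are placeholders.
import random

MASK_ID = 0

def mask_session(session, mask_prob=0.15, seed=None):
    rng = random.Random(seed)
    inputs, labels = [], []
    for pid in session:
        if rng.random() < mask_prob:
            inputs.append(MASK_ID)      # the model must reconstruct this product
            labels.append(pid)
        else:
            inputs.append(pid)
            labels.append(-100)         # ignored by the loss (PyTorch convention)
    return inputs, labels

session = [523, 87, 941, 87, 302, 17]   # one shopper's product-view sequence
print(mask_session(session, seed=3))
```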
Happy to share our new pre-print: "BERT Goes Shopping: Comparing Distributional Models for Product Representations" with @christineyyuu and @jacopotagliabue
— Federico Bianchi (@fb_vinid) December 18, 2020
We introduce ProdBERT for eCommerce product representations.
pre-print: https://t.co/B1tCURnJqa #NLProc #ecommerce #ai pic.twitter.com/YFDOpwyWl9
17. Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection
Edward Raff, William Fleshman, Richard Zak, Hyrum S. Anderson, Bobby Filar, Mark McLean
Recent works within machine learning have been tackling inputs of ever-increasing size, with cybersecurity presenting sequence classification problems of particularly extreme lengths. In the case of Windows executable malware detection, inputs may exceed 100 MB, which corresponds to a time series with roughly 100 million steps. To date, the closest approach to handling such a task is MalConv, a convolutional neural network capable of processing up to about 2 million steps. The O(T) memory of CNNs has prevented further application of CNNs to malware. In this work, we develop a new approach to temporal max pooling that makes the required memory invariant to the sequence length T. This makes MalConv substantially more memory efficient and faster to train on its original dataset, while removing its input length restrictions. We re-invest these gains into improving the MalConv architecture by developing a new Global Channel Gating design, giving us an attention mechanism capable of learning feature interactions across 100 million time steps in an efficient manner, a capability lacked by the original MalConv CNN. Our implementation can be found at https://github.com/NeuromorphicComputationResearchProgram/MalConv2
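The reason global temporal max pooling can be made memory-invariant to the sequence length is easy to see in a forward pass: stream the byte sequence in fixed-size chunks, embed and convolve each chunk, and keep only a running per-channel maximum. The sketch below shows just this forward-only idea with MalConv-like sizes as assumptions; the paper's training procedure additionally recovers gradients for the winning regions, which is omitted here.

```python
# Forward-only sketch of fixed-memory temporal max pooling over an arbitrarily
# long byte sequence: process fixed chunks, keep a running per-channel max.
# (The paper additionally recovers gradients for the winning regions.)
import torch
import torch.nn as nn

embed = nn.Embedding(257, 8)                     # 256 byte values + padding
conv  = nn.Conv1d(8, 128, kernel_size=512, stride=512)

def global_max_features(byte_stream, chunk=2**20):
    running_max = torch.full((128,), float("-inf"))
    for start in range(0, len(byte_stream), chunk):
        x = byte_stream[start:start + chunk].unsqueeze(0)     # (1, chunk)
        h = conv(embed(x).transpose(1, 2))                    # (1, 128, chunk/512)
        running_max = torch.maximum(running_max, h.amax(dim=-1).squeeze(0))
    return running_max            # memory depends on the chunk size, not on T

stream = torch.randint(0, 256, (4 * 2**20,))     # pretend 4 MB executable
print(global_max_features(stream).shape)         # torch.Size([128])
```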
Classifying Sequences of Extreme Length with Constant Memory Applied to Malware Detection
— Thomas (@evolvingstuff) December 18, 2020
"an attention mechanism capable of learning feature interactions across 100 million time steps in an efficient manner"https://t.co/9kQfVp6kmshttps://t.co/QevfvdGbCm pic.twitter.com/B3TSXG21op
18. Learning to Recover 3D Scene Shape from a Single Image
Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, Chunhua Shen
Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot be used to recover accurate 3D scene shape due to an unknown depth shift induced by shift-invariant reconstruction losses used in mixed-data depth prediction training, and possible unknown camera focal length. We investigate this problem in detail, and propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image, and then uses 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape. In addition, we propose an image-level normalized regression loss and a normal-based geometry loss to enhance depth prediction models trained on mixed datasets. We test our depth model on nine unseen datasets and achieve state-of-the-art performance on zero-shot dataset generalization. Code is available at: https://git.io/Depth
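To see why the depth shift and focal length matter, consider how a depth map is unprojected into a point cloud: the focal length scales the lateral coordinates, and an additive depth shift changes them in a depth-dependent way, distorting the recovered 3D shape. The helper below is my own illustration, not code from the paper's repository.

```python
# Back-project a depth map into a 3D point cloud given a focal length.
# The recovered shape changes if the depth has an unknown shift or the focal
# length is wrong -- exactly what the paper's point-cloud stage corrects.
import numpy as np

def unproject(depth, focal):
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - w / 2.0) * depth / focal
    y = (v - h / 2.0) * depth / focal
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = 2.0 + np.random.rand(240, 320)            # predicted up-to-shift depth
points_a = unproject(depth, focal=300.0)
points_b = unproject(depth + 1.5, focal=300.0)    # same depth + unknown shift
# The shifted cloud has the same depth extent but a stretched lateral footprint,
# i.e. a distorted 3D shape.
print(points_a.shape, np.ptp(points_a[:, 0]), np.ptp(points_b[:, 0]))
```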
Learning to Recover 3D Scene Shape from a Single Image
— AK (@ak92501) December 18, 2020
pdf: https://t.co/EDVxkr8U9g
abs: https://t.co/jaOohdPu4C pic.twitter.com/SzYN0NyQKI
19. Parallel WaveNet conditioned on VAE latent vectors
Jonas Rohnke, Tom Merritt, Jaime Lorenzo-Trueba, Adam Gabrys, Vatsal Aggarwal, Alexis Moinet, Roberto Barra-Chicote
Recently the state-of-the-art text-to-speech synthesis systems have shifted to a two-model approach: a sequence-to-sequence model to predict a representation of speech (typically mel-spectrograms), followed by a ‘neural vocoder’ model which produces the time-domain speech waveform from this intermediate speech representation. This approach is capable of synthesizing speech that is confusable with natural speech recordings. However, the inference speed of neural vocoder approaches represents a major obstacle for deploying this technology for commercial applications. Parallel WaveNet is one approach which has been developed to address this issue, trading off some synthesis quality for significantly faster inference speed. In this paper we investigate the use of a sentence-level conditioning vector to improve the signal quality of a Parallel WaveNet neural vocoder. We condition the neural vocoder with the latent vector from a pre-trained VAE component of a Tacotron 2-style sequence-to-sequence model. With this, we are able to significantly improve the quality of vocoded speech.
Parallel WaveNet conditioned on VAE latent vectors
— AK (@ak92501) December 18, 2020
pdf: https://t.co/12qVuv0Oqa
abs: https://t.co/8u4VYpKlw6 pic.twitter.com/Zi7vCg5QbN
20. Task Uncertainty Loss Reduce Negative Transfer in Asymmetric Multi-task Feature Learning
Rafael Peres da Silva, Chayaporn Suphavilai, Niranjan Nagarajan
Multi-task learning (MTL) is frequently used in settings where a target task has to be learnt based on limited training data, but knowledge can be leveraged from related auxiliary tasks. While MTL can improve task performance overall relative to single-task learning (STL), these improvements can hide negative transfer (NT), where STL may deliver better performance for many individual tasks. Asymmetric multitask feature learning (AMTFL) is an approach that tries to address this by allowing tasks with higher loss values to have smaller influence on feature representations for learning other tasks. However, task loss values do not necessarily indicate the reliability of models for a specific task. We present examples of NT in two orthogonal datasets (image recognition and pharmacogenomics) and tackle this challenge by using aleatoric homoscedastic uncertainty to capture the relative confidence between tasks and to set weights for the task losses. Our results show that this approach reduces NT, providing a new way to enable robust MTL.
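The general mechanism the abstract builds on, weighting task losses by learned homoscedastic (aleatoric) uncertainty, is easy to sketch in the usual Kendall-et-al. style: each task gets a learned log-variance, its loss is scaled by the inverse variance, and a regularizer keeps the variances from exploding. This is only the standard formulation; the paper's asymmetric variant is not reproduced here.

```python
# Homoscedastic-uncertainty task weighting (Kendall et al.-style): tasks the
# model is uncertain about are down-weighted automatically. The log-variances
# are learned jointly with the task networks.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))  # s_i = log(sigma_i^2)

    def forward(self, task_losses):
        total = 0.0
        for i, loss in enumerate(task_losses):
            # exp(-s_i) * L_i weights the task; + s_i penalizes large variances
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

weighting = UncertaintyWeightedLoss(num_tasks=3)
losses = [torch.tensor(0.8), torch.tensor(2.5), torch.tensor(0.3)]
print(float(weighting(losses)))        # combined objective to backpropagate
```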
21. Self-Supervised Sketch-to-Image Synthesis
Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal
Imagining a colored realistic image from an arbitrarily drawn sketch is one of the human capabilities that we are eager for machines to mimic. Unlike previous methods that either require sketch-image pairs or utilize low-quality detected edges as sketches, we study the exemplar-based sketch-to-image (s2i) synthesis task in a self-supervised learning manner, eliminating the necessity of paired sketch data. To this end, we first propose an unsupervised method to efficiently synthesize line-sketches for general RGB-only datasets. With the synthetic paired data, we then present a self-supervised Auto-Encoder (AE) to decouple the content/style features from sketches and RGB-images, and synthesize images that are both content-faithful to the sketches and style-consistent to the RGB-images. While prior works employ either a cycle-consistency loss or dedicated attentional modules to enforce content/style fidelity, we show the AE’s superior performance with pure self-supervision. To further improve the synthesis quality at high resolution, we also leverage an adversarial network to refine the details of synthetic images. Extensive experiments at 1024*1024 resolution demonstrate new state-of-the-art performance of the proposed model on the CelebA-HQ and Wiki-Art datasets. Moreover, with the proposed sketch generator, the model shows promising performance on style mixing and style transfer, which require synthesized images to be both style-consistent and semantically meaningful. Our code is available on https://github.com/odegeasslbc/Self-Supervised-Sketch-to-Image-Synthesis-PyTorch, and please visit https://create.playform.io/my-projects?mode=sketch for an online demo of our model.
Self-Supervised Sketch-to-Image Synthesis
— AK (@ak92501) December 18, 2020
pdf: https://t.co/HthysJ8HzS
abs: https://t.co/acVBlkImCx
github: https://t.co/OFQdsw6Haa pic.twitter.com/58nSLPakKg
22. Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup
Guodong Xu, Ziwei Liu, Chen Change Loy
Knowledge distillation, which involves extracting the “dark knowledge” from a teacher network to guide the learning of a student network, has emerged as an essential technique for model compression and transfer learning. Unlike previous works that focus on the accuracy of the student network, here we study a little-explored but important question, i.e., knowledge distillation efficiency. Our goal is to achieve a performance comparable to conventional knowledge distillation with a lower computation cost during training. We show that UNcertainty-aware mIXup (UNIX) can serve as a clean yet effective solution. The uncertainty sampling strategy is used to evaluate the informativeness of each training sample. Adaptive mixup is applied to uncertain samples to compact knowledge. We further show that the redundancy of conventional knowledge distillation lies in the excessive learning of easy samples. By combining uncertainty and mixup, our approach reduces the redundancy and makes better use of each query to the teacher network. We validate our approach on CIFAR100 and ImageNet. Notably, with only 79% of the computation cost, we outperform conventional knowledge distillation on CIFAR100 and achieve a comparable result on ImageNet.
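The two ingredients named in the abstract can be sketched as follows: rank samples by the student's prediction uncertainty (entropy here), then apply mixup to the most uncertain samples before querying the teacher, so each teacher forward pass carries information about two hard samples. The 50% cut and the Beta(1,1) mixing coefficient are placeholder choices, not the paper's exact schedule.

```python
# Sketch of uncertainty-aware mixup for distillation: keep the samples the
# student is most uncertain about (entropy of its predictions) and mix them
# pairwise before querying the teacher.
import torch
import torch.nn.functional as F

def uncertainty_mixup(images, student_logits, alpha=1.0):
    probs = F.softmax(student_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    order = entropy.argsort(descending=True)
    uncertain = order[: len(order) // 2]              # hardest half for the student
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    partner = uncertain[torch.randperm(len(uncertain))]
    mixed = lam * images[uncertain] + (1 - lam) * images[partner]
    return mixed, uncertain, partner, lam             # the teacher is queried on `mixed`

images = torch.randn(16, 3, 32, 32)
student_logits = torch.randn(16, 100)
mixed, idx_a, idx_b, lam = uncertainty_mixup(images, student_logits)
print(mixed.shape, round(lam, 2))                     # torch.Size([8, 3, 32, 32])
```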
Computation-Efficient Knowledge Distillation via Uncertainty-Aware Mixup https://t.co/IvLQNeb5Hc
— phalanx (@ZFPhalanx) December 18, 2020
(Translated from Japanese:) It reduces the teacher model's computation cost by using only the samples that are hard for the student model. Mixup is used from a knowledge-distillation perspective rather than as augmentation (see the paper for details). I'd like to try this in a competition. pic.twitter.com/OqSk6VhFhQ
23. End-to-End Human Pose and Mesh Reconstruction with Transformers
Kevin Lin, Lijuan Wang, Zicheng Liu
We present a new method, called MEsh TRansfOrmer (METRO), to reconstruct 3D human pose and mesh vertices from a single image. Our method uses a transformer encoder to jointly model vertex-vertex and vertex-joint interactions, and outputs 3D joint coordinates and mesh vertices simultaneously. Compared to existing techniques that regress pose and shape parameters, METRO does not rely on any parametric mesh models like SMPL, thus it can be easily extended to other objects such as hands. We further relax the mesh topology and allow the transformer self-attention mechanism to freely attend between any two vertices, making it possible to learn non-local relationships among mesh vertices and joints. With the proposed masked vertex modeling, our method is more robust and effective in handling challenging situations like partial occlusions. METRO generates new state-of-the-art results for human mesh reconstruction on the public Human3.6M and 3DPW datasets. Moreover, we demonstrate the generalizability of METRO to 3D hand reconstruction in the wild, outperforming existing state-of-the-art methods on FreiHAND dataset.
End-to-End Human Pose and Mesh Reconstruction with Transformers
— AK (@ak92501) December 18, 2020
pdf: https://t.co/lbv4euzZ07
abs: https://t.co/PjYbmYhGC8 pic.twitter.com/fQHcdzFY1B
24. On the experimental feasibility of quantum state reconstruction via machine learning
Sanjaya Lohani, Thomas A. Searles, Brian T. Kirby, Ryan T. Glasser
We determine the resource scaling of machine learning-based quantum state reconstruction methods, in terms of both inference and training, for systems of up to four qubits. Further, we examine system performance in the low-count regime, likely to be encountered in the tomography of high-dimensional systems. Finally, we implement our quantum state reconstruction method on an IBM Q quantum computer and confirm our results.
25. Towards Resolving the Implicit Bias of Gradient Descent for Matrix Factorization: Greedy Low-Rank Learning
Zhiyuan Li, Yuping Luo, Kaifeng Lyu
Matrix factorization is a simple and natural test-bed to investigate the implicit regularization of gradient descent. Gunasekar et al. (2018) conjectured that Gradient Flow with infinitesimal initialization converges to the solution that minimizes the nuclear norm, but a series of recent papers argued that the language of norm minimization is not sufficient to give a full characterization for the implicit regularization. In this work, we provide theoretical and empirical evidence that for depth-2 matrix factorization, gradient flow with infinitesimal initialization is mathematically equivalent to a simple heuristic rank minimization algorithm, Greedy Low-Rank Learning, under some reasonable assumptions. This generalizes the rank minimization view from previous works to a much broader setting and enables us to construct counter-examples to refute the conjecture from Gunasekar et al. (2018). We also extend the results to the case of depth greater than two, and we show that the benefit of being deeper is that the above convergence has a much weaker dependence on the initialization magnitude, so that this rank minimization is more likely to take effect for initialization with practical scale.
For matrix factorization, GD + tiny init is equivalent to a heuristic rank-minimization algorithm, GLRL.
— Zhiyuan Li (@zhiyuanli_) December 18, 2020
This negatively resolves the conjecture by Gunasekar et al., 2017, GD + tiny init minimizes nuclear norm.
paper: https://t.co/cWrq6OfERd
w/ @luo_yuping, @vfleaking
(1/4) pic.twitter.com/aPlx5w67Bs
26. PCT: Point Cloud Transformer
Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu
The irregular domain and lack of ordering make it challenging to design deep neural networks for point cloud processing. This paper presents a novel framework named Point Cloud Transformer (PCT) for point cloud learning. PCT is based on the Transformer, which has achieved huge success in natural language processing and displays great potential in image processing. It is inherently permutation invariant for processing a sequence of points, making it well-suited for point cloud learning. To better capture local context within the point cloud, we enhance input embedding with the support of farthest point sampling and nearest neighbor search. Extensive experiments demonstrate that PCT achieves state-of-the-art performance on shape classification, part segmentation and normal estimation tasks.
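Farthest point sampling, which the abstract mentions using (together with nearest-neighbor search) to build local context for the input embedding, is simple enough to sketch directly: repeatedly pick the point farthest from everything chosen so far. This is a plain O(N x samples) numpy version, not the paper's implementation.

```python
# Farthest point sampling: repeatedly pick the point farthest from all points
# chosen so far. Plain numpy, O(N * num_samples).
import numpy as np

def farthest_point_sampling(points, num_samples, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    chosen = [int(rng.integers(n))]
    dist = np.full(n, np.inf)
    for _ in range(num_samples - 1):
        dist = np.minimum(dist, np.linalg.norm(points - points[chosen[-1]], axis=1))
        chosen.append(int(dist.argmax()))      # farthest from everything chosen so far
    return np.array(chosen)

cloud = np.random.rand(2048, 3)                # a toy point cloud
idx = farthest_point_sampling(cloud, 256)
print(idx.shape, len(set(idx.tolist())))       # (256,) 256 -- all distinct points
```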
PCT: Point Cloud Transformer
— AK (@ak92501) December 18, 2020
pdf: https://t.co/ol61Z3dKZb
abs: https://t.co/HtUBIfaM3X pic.twitter.com/XAWugJ8DeS
27. Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency
Qiang Zhang, Tete Xiao, Alexei A. Efros, Lerrel Pinto, Xiaolong Wang
At the heart of many robotics problems is the challenge of learning correspondences across domains. For instance, imitation learning requires obtaining correspondence between humans and robots; sim-to-real requires correspondence between physics simulators and the real world; transfer learning requires correspondences between different robotics environments. This paper aims to learn correspondence across domains differing in representation (vision vs. internal state), physics parameters (mass and friction), and morphology (number of limbs). Importantly, correspondences are learned using unpaired and randomly collected data from the two domains. We propose dynamics cycles that align dynamic robot behavior across two domains using a cycle-consistency constraint. Once this correspondence is found, we can directly transfer the policy trained on one domain to the other, without needing any additional fine-tuning on the second domain. We perform experiments across a variety of problem domains, both in simulation and on a real robot. Our framework is able to align uncalibrated monocular video of a real robot arm to dynamic state-action trajectories of a simulated arm without paired data. Video demonstrations of our results are available at: https://sjtuzq.github.io/cycle_dynamics.html
After adding time in cycles, it is time to add dynamics in cycles (https://t.co/dcNojvuVVM).
— Xiaolong Wang (@xiaolonw) December 18, 2020
We add a forward dynamics model in CycleGAN to learn correspondence and align dynamic robot behavior across two domains differing in observed representation, physics, and morphology. pic.twitter.com/2WLwFY0TXa
28. Roof-GAN: Learning to Generate Roof Geometry and Relations for Residential Houses
Yiming Qian, Hao Zhang, Yasutaka Furukawa
This paper presents Roof-GAN, a novel generative adversarial network that generates structured geometry of residential roof structures as a set of roof primitives and their relationships. Given the number of primitives, the generator produces a structured roof model as a graph, which consists of 1) primitive geometry as raster images at each node, encoding facet segmentation and angles; 2) inter-primitive collinear/coplanar relationships at each edge; and 3) primitive geometry in a vector format at each node, generated by a novel differentiable vectorizer while enforcing the relationships. The discriminator is trained to assess the primitive raster geometry, the primitive relationships, and the primitive vector geometry in a fully end-to-end architecture. Qualitative and quantitative evaluations demonstrate the effectiveness of our approach in generating diverse and realistic roof models over the competing methods with a novel metric proposed in this paper for the task of structured geometry generation. We will share our code and data.
Roof-GAN: Learning to Generate Roof Geometry and Relations for Residential Houses
— AK (@ak92501) December 18, 2020
pdf: https://t.co/7ZjS7RkM5L
abs: https://t.co/C0hX3br5nU pic.twitter.com/32eyiJa4VN