1. AST: Audio Spectrogram Transformer
Yuan Gong, Yu-An Chung, James Glass
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
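The core idea is direct: treat the 2D spectrogram like an image, slice it into patches, and feed linearly projected patch embeddings (plus a classification token) to a plain Transformer encoder. Below is a minimal sketch of that convolution-free pipeline; the patch size, embedding width, and depth are illustrative assumptions, not the paper's exact configuration (AST additionally benefits from ImageNet-pretrained weights).

```python
# Minimal sketch of a convolution-free spectrogram classifier in the spirit of AST.
# Hyperparameters here are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class TinySpectrogramTransformer(nn.Module):
    def __init__(self, n_mels=128, n_frames=1024, patch=16, dim=192, depth=4, n_classes=527):
        super().__init__()
        n_patches = (n_mels // patch) * (n_frames // patch)
        self.patch = patch
        self.proj = nn.Linear(patch * patch, dim)          # linear patch embedding (no convolution)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))    # classification token
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, spec):                               # spec: (B, n_mels, n_frames)
        B = spec.size(0)
        # split the spectrogram into non-overlapping patch x patch tiles and flatten each
        tiles = spec.unfold(1, self.patch, self.patch).unfold(2, self.patch, self.patch)
        tiles = tiles.reshape(B, -1, self.patch * self.patch)
        x = torch.cat([self.cls.expand(B, -1, -1), self.proj(tiles)], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])                          # classify from the [CLS] token

logits = TinySpectrogramTransformer()(torch.randn(2, 128, 1024))  # (2, 527) AudioSet-sized output
```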
AST: Audio Spectrogram Transformer
— Aran Komatsuzaki (@arankomatsuzaki) April 6, 2021
Proposes Audio Spectrogram Transformer, the first purely attention-based model for audio classification that achieves the new SotA on various benchmarks. https://t.co/LbW5zb5hDM pic.twitter.com/sZiqyamFLY
AST: Audio Spectrogram Transformer
— AK (@ak92501) April 6, 2021
pdf: https://t.co/mbtNQap583
abs: https://t.co/5PhyMokTkS pic.twitter.com/hC3qr4uxQt
2. An Empirical Study of Training Self-Supervised Visual Transformers
Xinlei Chen, Saining Xie, Kaiming He
This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Visual Transformers (ViT). While the training recipes for standard convolutional networks have been highly mature and robust, the recipes for ViT are yet to be built, especially in self-supervised scenarios where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and it can be hidden by apparently good results. We reveal that these results are indeed partial failures, and they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.
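For orientation, here is a hedged sketch of the symmetrized contrastive objective at the heart of MoCo-style training, with any backbone (here, a ViT) producing the query and momentum-key embeddings; the temperature and the omission of MoCo v3's prediction head are simplifying assumptions, not the paper's exact recipe.

```python
# Hedged sketch of a symmetrized InfoNCE objective as used in MoCo-style frameworks.
import torch
import torch.nn.functional as F

def info_nce(q, k, tau=0.2):
    """q: queries from the online encoder, k: keys from the momentum encoder; both (N, D)."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / tau                        # (N, N) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)          # positive pairs sit on the diagonal

def symmetrized_loss(f_q, f_k, x1, x2, tau=0.2):
    """Two augmented views x1, x2; the momentum branch f_k receives no gradient."""
    with torch.no_grad():
        k1, k2 = f_k(x1), f_k(x2)
    return info_nce(f_q(x1), k2, tau) + info_nce(f_q(x2), k1, tau)
```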
An Empirical Study of Training Self-Supervised Visual Transformers
— Aran Komatsuzaki (@arankomatsuzaki) April 6, 2021
Observes that self-supervised ViT has a major instability issue that degrades accuracy, and this can be improved when training is made more stable. https://t.co/4LKsEjYEes pic.twitter.com/fZ4u0m5TLT
An Empirical Study of Training Self-Supervised Visual Transformers
— AK (@ak92501) April 6, 2021
pdf: https://t.co/vDLciuid1W
abs: https://t.co/DaXNSi5MDZ pic.twitter.com/7Y5dqBF24Q
3. Removing Pixel Noises and Spatial Artifacts with Generative Diversity Denoising Methods
Mangal Prakash, Mauricio Delbracio, Peyman Milanfar, Florian Jug
- retweets: 522, favorites: 150 (04/07/2021 10:15:28)
- links: abs | pdf
- eess.IV | cs.CV | cs.LG | q-bio.QM
Image denoising and artefact removal are complex inverse problems admitting many potential solutions. Variational Autoencoders (VAEs) can be used to learn a whole distribution of sensible solutions, from which one can sample efficiently. However, such a generative approach to image restoration has so far been studied only in the context of pixel-wise noise removal (e.g. Poisson or Gaussian noise). While important, a plethora of application domains suffer from imaging artefacts (structured noises) that alter groups of pixels in correlated ways. In this work we show, for the first time, that generative diversity denoising (GDD) approaches can learn to remove structured noises without supervision. To this end, we investigate two existing GDD architectures, introduce a new one based on hierarchical VAEs, and compare their performances against a total of seven state-of-the-art baseline methods on five sources of structured noise (including tomography reconstruction artefacts and microscopy artefacts). We find that GDD methods outperform all unsupervised baselines and in many cases do not lag far behind supervised results (on some occasions even surpassing them). In addition to structured noise removal, we also show that our new GDD method produces new state-of-the-art (SOTA) results on seven out of eight benchmark datasets for pixel-noise removal. Finally, we offer insights into the daunting question of how GDD methods distinguish structured noise, which we would like to see removed, from image signals, which we want to see retained.
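At inference time, the appeal of GDD methods is that a single trained VAE yields many plausible restorations. A minimal sketch of that sampling loop is below, assuming a trained model `vae` exposing hypothetical `encode`/`decode` methods; averaging the samples as a consensus (MMSE-style) estimate is likewise an assumption for illustration.

```python
# Minimal sketch of generative diversity denoising at inference time.
# `vae.encode` / `vae.decode` are assumed interfaces, not the authors' API.
import torch

def diversity_denoise(vae, noisy, n_samples=100):
    mu, logvar = vae.encode(noisy)                            # approximate posterior q(z | noisy)
    samples = []
    for _ in range(n_samples):
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterized posterior sample
        samples.append(vae.decode(z))                         # one plausible clean interpretation
    stack = torch.stack(samples)
    return stack.mean(0), stack                               # consensus estimate + the diversity
```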
In @Mangal_Prakash_’s latest work, we teamed up with @docmilanfar and @2ptmvd and show how Generative Diversity Denoising methods can not only lead to SOTA denoising results, but also be used to remove spatial artifacts (e.g. in tomography)! #HDivNoising https://t.co/adZfKHWhxP pic.twitter.com/fWcnzWE2bM
— Florian Jug (@florianjug) April 6, 2021
Removing Pixel Noises and Spatial Artifacts with Generative Diversity Denoising Methods
— AK (@ak92501) April 6, 2021
pdf: https://t.co/75D9hhAe4f
abs: https://t.co/lWROxgm0c9 pic.twitter.com/gVs6cTRvDJ
4. Efficient DETR: Improving End-to-End Object Detector with Dense Prior
Zhuyu Yao, Jiangbo Ai, Boxun Li, Chi Zhang
The recently proposed end-to-end transformer detectors, such as DETR and Deformable DETR, have a cascade structure of stacking 6 decoder layers to update object queries iteratively, without which their performance degrades seriously. In this paper, we find that the random initialization of object containers, which include object queries and reference points, is mainly responsible for the requirement of multiple iterations. Based on our findings, we propose Efficient DETR, a simple and efficient pipeline for end-to-end object detection. By taking advantage of both dense detection and sparse set detection, Efficient DETR leverages a dense prior to initialize the object containers and bridges the gap between the 1-decoder and 6-decoder structures. Experiments conducted on MS COCO show that our method, with only 3 encoder layers and 1 decoder layer, achieves competitive performance with state-of-the-art object detection methods. Efficient DETR is also robust in crowded scenes, outperforming modern detectors on the CrowdHuman dataset by a large margin.
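A hedged sketch of the dense-prior initialization idea: score every encoder location with a dense head, keep the top-k locations as reference points, and seed the object queries with the features found there. All shapes and names below are illustrative assumptions, not the paper's code.

```python
# Assumed sketch of initializing DETR-style "object containers" from a dense prior.
import torch

def init_object_containers(features, scores, k=100):
    """features: (B, HW, D) encoder features; scores: (B, HW) dense objectness scores."""
    top_scores, idx = scores.topk(k, dim=1)                   # (B, k) best dense proposals
    queries = torch.gather(
        features, 1, idx.unsqueeze(-1).expand(-1, -1, features.size(-1))
    )                                                         # (B, k, D) query initialization
    # reference points: normalized 2D locations of the selected cells (square grid assumed)
    side = int(features.size(1) ** 0.5)
    ref = torch.stack([(idx % side).float() / side, (idx // side).float() / side], dim=-1)
    return queries, ref                                       # feed these to a 1-layer decoder
```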
Efficient DETR: Improving End-to-End Object Detector with Dense Prior
— AK (@ak92501) April 6, 2021
pdf: https://t.co/vC9MiooCnz
abs: https://t.co/WGgBSihQM5 pic.twitter.com/h27NpI2crA
Efficient DETR: Improving End-to-End Object Detector with Dense Prior https://t.co/CCRybhms9W pic.twitter.com/0M9MQrjBKH
— phalanx (@ZFPhalanx) April 6, 2021
5. Convolutional Neural Opacity Radiance Fields
Haimin Luo, Anpei Chen, Qixuan Zhang, Bai Pang, Minye Wu, Lan Xu, Jingyi Yu
Photo-realistic modeling and rendering of fuzzy objects with complex opacity are critical for numerous immersive VR/AR applications, but remain challenging due to strong view-dependent brightness and color variations. In this paper, we propose a novel scheme to generate opacity radiance fields with a convolutional neural renderer for fuzzy objects, which is the first to combine explicit opacity supervision and a convolutional mechanism in the neural radiance field framework, enabling high-quality appearance and globally consistent alpha-matte generation in arbitrary novel views. More specifically, we propose an efficient sampling strategy along both the camera rays and the image plane, which enables efficient radiance field sampling and learning in a patch-wise manner, as well as a novel volumetric feature integration scheme that generates per-patch hybrid feature embeddings to reconstruct view-consistent, fine-detailed appearance and opacity output. We further adopt a patch-wise adversarial training scheme to preserve both high-frequency appearance and opacity details in a self-supervised framework. We also introduce an effective multi-view image capture system to capture high-quality color and alpha maps for challenging fuzzy objects. Extensive experiments on existing datasets and our new challenging fuzzy object dataset demonstrate that our method achieves photo-realistic, globally consistent, and fine-detailed appearance and opacity free-viewpoint rendering for various fuzzy objects.
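One ingredient a short sketch can make concrete is the patch-wise sampling: instead of scattering random per-pixel rays, rays are drawn for a coherent window on the image plane so a convolutional renderer (and a patch discriminator) can operate on it. The helper below is a minimal, assumed version of that idea, not the authors' implementation.

```python
# Assumed sketch of patch-wise ray sampling for radiance-field training,
# as opposed to the usual random per-pixel rays. Patch size is illustrative.
import torch

def sample_ray_patch(H, W, patch=32):
    """Pick a random patch x patch window on the image plane and return its pixel grid."""
    top = torch.randint(0, H - patch + 1, (1,)).item()
    left = torch.randint(0, W - patch + 1, (1,)).item()
    ys = torch.arange(top, top + patch)
    xs = torch.arange(left, left + patch)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")
    return torch.stack([gx, gy], dim=-1)   # (patch, patch, 2) pixel coords for ray casting
```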
Convolutional Neural Opacity Radiance Fields
— AK (@ak92501) April 6, 2021
pdf: https://t.co/TF69E8Yfde
abs: https://t.co/848c7VHVNA pic.twitter.com/rNvBY6LkC9
6. Generating Furry Cars: Disentangling Object Shape & Appearance across Multiple Domains
Utkarsh Ojha, Krishna Kumar Singh, Yong Jae Lee
We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars). The goal is to learn a generative model that learns an intermediate distribution, which borrows a subset of properties from each domain, enabling the generation of images that did not exist in any domain exclusively. This challenging problem requires an accurate disentanglement of object shape, appearance, and background from each domain, so that the appearance and shape factors from the two domains can be interchanged. We augment an existing approach that can disentangle factors within a single domain but struggles to do so across domains. Our key technical contribution is to represent object appearance with a differentiable histogram of visual features, and to optimize the generator so that two images with the same latent appearance factor but different latent shape factors produce similar histograms. On multiple multi-domain datasets, we demonstrate that our method leads to accurate and consistent appearance and shape transfer across domains.
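The differentiable histogram is the part most easily shown in code: replace hard bin assignment with soft (e.g. Gaussian) bin weights so gradients flow to the generator, then penalize the distance between histograms of two images sharing an appearance code. Bin count, value range, bandwidth, and the L1 distance below are illustrative assumptions.

```python
# Hedged sketch of a differentiable histogram of visual features via soft binning,
# plus the matching loss between two images that share a latent appearance factor.
import torch

def soft_histogram(feats, n_bins=16, lo=0.0, hi=1.0, bandwidth=0.1):
    """feats: (N,) feature values in [lo, hi]; returns a normalized, differentiable histogram."""
    centers = torch.linspace(lo, hi, n_bins, device=feats.device)
    weights = torch.exp(-((feats[:, None] - centers[None, :]) ** 2) / (2 * bandwidth ** 2))
    hist = weights.sum(0)
    return hist / hist.sum()

def appearance_match_loss(feats_a, feats_b):
    """Encourage two generated images with the same appearance code to match in histogram."""
    return (soft_histogram(feats_a) - soft_histogram(feats_b)).abs().sum()
```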
Generating Furry Cars: Disentangling Object Shape & Appearance across Multiple Domains
— AK (@ak92501) April 6, 2021
pdf: https://t.co/M6vcCiAWuP
abs: https://t.co/T7U1PbIYE0
project page: https://t.co/k3DGXXbm5U pic.twitter.com/p8JZRPmjC6
https://t.co/H9SI3iteL9
— Hカップの論文を読むタイプのMLエンジニアver0.02 (@SupervisedAi) April 6, 2021
A paper proposing a domain-translation method that can generate new images combining the shape of one domain with the surface texture of a completely different one.
It can output images that fuse shape and surface texture from different domains, each controlled independently.
This enables complex generation control, such as furry cars, that previous models could not achieve.
(1/3) pic.twitter.com/QRixt5hrnv
7. Hierarchical Pyramid Representations for Semantic Segmentation
Hiroaki Aizawa, Yukihiro Domae, Kunihito Kato
Understanding the context of complex and cluttered scenes is a challenging problem for semantic segmentation. However, it is difficult to model the context without prior and additional supervision because the scene’s factors, such as the scale, shape, and appearance of objects, vary considerably in these scenes. To solve this, we propose to learn the structures of objects and the hierarchy among objects, because context is based on these intrinsic properties. In this study, we design novel hierarchical, contextual, and multiscale pyramidal representations to capture these properties from an input image. Our key idea is the recursive segmentation of different hierarchical regions based on a predefined number of regions, and the aggregation of the context in these regions. The aggregated contexts are used to predict the contextual relationship between the regions and to partition the regions at the following hierarchical level. Finally, by constructing the pyramid representations from the recursively aggregated context, multiscale and hierarchical properties are attained. In the experiments, we confirmed that our proposed method achieves state-of-the-art performance on the PASCAL Context dataset.
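A rough, runnable caricature of the recursion: aggregate context inside a region, split the region, and recurse, collecting one context vector per region at every level. In the paper the partition is predicted from the aggregated context by a learned head; the similarity-quantile split below is only a stand-in so the sketch runs end to end.

```python
# Assumed sketch of recursive region partitioning with per-region context aggregation.
import torch

def build_context_pyramid(features, mask, depth=2, n_sub=2):
    """features: (C, H, W); mask: (H, W) bool. Returns a list of per-region context vectors."""
    if not mask.any():
        return []
    region_feats = features[:, mask]                     # (C, N) pixels inside the region
    context = region_feats.mean(dim=1)                   # aggregated region context
    contexts = [context]
    if depth == 0 or region_feats.size(1) < n_sub:
        return contexts
    sim = (region_feats * context[:, None]).sum(0)       # pixel-to-context similarity (stand-in)
    edges = torch.quantile(sim, torch.linspace(0, 1, n_sub + 1))
    for i in range(n_sub):                               # recurse into each similarity band
        sub = torch.zeros_like(mask)
        sub[mask] = (sim >= edges[i]) & (sim <= edges[i + 1])
        contexts += build_context_pyramid(features, sub, depth - 1, n_sub)
    return contexts

pyramid = build_context_pyramid(torch.randn(8, 32, 32), torch.ones(32, 32, dtype=torch.bool))
```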
We've released our paper on arXiv!
— hiroaki aizawa (@aizw_h) April 6, 2021
We propose a method for aggregating context hierarchically.
abs: https://t.co/GnKO4yVO2Y
project page: https://t.co/pMWtUXnIGy pic.twitter.com/89YjHCKfUC
Hierarchical Pyramid Representations for Semantic Segmentation
— AK (@ak92501) April 6, 2021
pdf: https://t.co/d9yQAXbrPm
abs: https://t.co/c1oPrie1BC pic.twitter.com/whdIBbiSZt
8. Timers and Such: A Practical Benchmark for Spoken Language Understanding with Numbers
Loren Lugosch, Piyush Papreja, Mirco Ravanelli, Abdelwahab Heba, Titouan Parcollet
This paper introduces Timers and Such, a new open source dataset of spoken English commands for common voice control use cases involving numbers. We describe the gap in existing spoken language understanding datasets that Timers and Such fills, the design and creation of the dataset, and experiments with a number of ASR-based and end-to-end baseline models, the code for which has been made available as part of the SpeechBrain toolkit.
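Since the baselines ship with SpeechBrain, trying a released model should reduce to a few lines like the following; the `source` model id and the audio file are hypothetical placeholders (see the links in the tweet below for the actual releases).

```python
# Hedged usage sketch: decoding one spoken command with a pretrained SpeechBrain model.
from speechbrain.pretrained import EncoderDecoderASR  # interface available in SpeechBrain 0.5.x

model = EncoderDecoderASR.from_hparams(
    source="speechbrain/some-timers-and-such-model",   # placeholder model id, not a real release
    savedir="pretrained_model",
)
print(model.transcribe_file("set_a_timer_for_ten_minutes.wav"))  # hypothetical local file
```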
Timers and Such! New speech dataset for timers, alarms, unit conversion, and math.
— Loren Lugosch (@lorenlugosch) April 6, 2021
Use it to test your speech models or to train your own offline voice assistant.
Paper: https://t.co/1Itx4KKVyo
Code: https://t.co/WxJctrnXXI
Pretrained model @huggingface: https://t.co/arrKjGLPJY
9. Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation
Emilio Parisotto, Ruslan Salakhutdinov
Many real-world applications such as robotics provide hard constraints on power and compute that limit the viable model complexity of Reinforcement Learning (RL) agents. Similarly, in many distributed RL settings, acting is done on un-accelerated hardware such as CPUs, which likewise restricts model size to prevent intractable experiment run times. These “actor-latency” constrained settings present a major obstruction to the scaling up of model complexity that has recently been extremely successful in supervised learning. To be able to utilize large model capacity while still operating within the limits imposed by the system during acting, we develop an “Actor-Learner Distillation” (ALD) procedure that leverages a continual form of distillation that transfers learning progress from a large capacity learner model to a small capacity actor model. As a case study, we develop this procedure in the context of partially-observable environments, where transformer models have had large improvements over LSTMs recently, at the cost of significantly higher computational complexity. With transformer models as the learner and LSTMs as the actor, we demonstrate in several challenging memory environments that using Actor-Learner Distillation recovers the clear sample-efficiency gains of the transformer learner model while maintaining the fast inference and reduced total training time of the LSTM actor model.
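The distillation step itself is compact: on recently collected experience, regress the small actor's action distribution (and optionally its value estimate) toward the large learner's outputs, treating the learner as a fixed target. The sketch below is an assumed rendering of that loss, not the paper's exact objective or weighting.

```python
# Hedged sketch of the core Actor-Learner Distillation update for discrete actions.
import torch
import torch.nn.functional as F

def ald_distill_loss(actor_logits, learner_logits, actor_values=None, learner_values=None, beta=1.0):
    """KL(learner || actor) over action distributions; the learner is a fixed (detached) target."""
    target = F.softmax(learner_logits.detach(), dim=-1)
    loss = F.kl_div(F.log_softmax(actor_logits, dim=-1), target, reduction="batchmean")
    if actor_values is not None and learner_values is not None:
        loss = loss + beta * F.mse_loss(actor_values, learner_values.detach())
    return loss
```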
Efficient Transformers in Reinforcement Learning using Actor-Learner Distillation
— AK (@ak92501) April 6, 2021
pdf: https://t.co/ncLWJgbf6d
abs: https://t.co/ozkZ6WgNRl pic.twitter.com/N2bJv4kd6j
10. SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition
Patrick K. O’Neill, Vitaly Lavrukhin, Somshubra Majumdar, Vahid Noroozi, Yuekai Zhang, Oleksii Kuchaiev, Jagadeesh Balam, Yuliya Dovzhenko, Keenan Freyberg, Michael D. Shulman, Boris Ginsburg, Shinji Watanabe, Georg Kucsko
In the English speech-to-text (STT) machine learning task, acoustic models are conventionally trained on uncased Latin characters, and any necessary orthography (such as capitalization, punctuation, and denormalization of non-standard words) is imputed by separate post-processing models. This adds complexity and limits performance, as many formatting tasks benefit from semantic information present in the acoustic signal but absent in transcription. Here we propose a new STT task: end-to-end neural transcription with fully formatted text for target labels. We present baseline Conformer-based models trained on a corpus of 5,000 hours of professionally transcribed earnings calls, achieving a CER of 1.7. As a contribution to the STT research community, we release the corpus free for non-commercial use at https://datasets.kensho.com/datasets/scribe.
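Since the headline metric is character error rate on fully formatted text (capitalization, punctuation, and number formatting all count as characters), a minimal CER implementation makes the target concrete; the example strings below are invented.

```python
# Minimal character error rate (CER): edit distance between hypothesis and reference
# characters, normalized by reference length. Single-row Levenshtein DP.
def cer(reference: str, hypothesis: str) -> float:
    r, h = list(reference), list(hypothesis)
    d = list(range(len(h) + 1))
    for i, rc in enumerate(r, 1):
        prev, d[0] = d[0], i
        for j, hc in enumerate(h, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (rc != hc))
    return d[len(h)] / max(len(r), 1)

print(cer("Q1 revenue was $5,000.", "Q1 revenue was $5000."))  # formatting errors count too
```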
SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition
— AK (@ak92501) April 6, 2021
pdf: https://t.co/wcecmEyIpf
abs: https://t.co/Im6u55i8dx
project page: https://t.co/r8giPxG440 pic.twitter.com/sUtGFAoKYg
11. New Benchmarks for Learning on Non-Homophilous Graphs
Derek Lim, Xiuyu Li, Felix Hohne, Ser-Nam Lim
Much data with graph structures satisfies the principle of homophily, meaning that connected nodes tend to be similar with respect to a specific attribute. As such, ubiquitous datasets for graph machine learning tasks have generally been highly homophilous, rewarding methods that leverage homophily as an inductive bias. Recent work has pointed out this particular focus, as new non-homophilous datasets have been introduced and graph representation learning models better suited for low-homophily settings have been developed. However, these datasets are small and poorly suited to truly testing the effectiveness of new methods in non-homophilous settings. We present a series of improved graph datasets with node label relationships that do not satisfy the homophily principle. Along with this, we introduce a new measure of the presence or absence of homophily that is better suited than existing measures in different regimes. We benchmark a range of simple methods and graph neural networks across our proposed datasets, drawing new insights for further research. Data and code can be found at https://github.com/CUAI/Non-Homophily-Benchmarks.
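For reference, the classical edge-homophily ratio these benchmarks are designed to stress is the fraction of edges whose endpoints share a label; low values characterize the non-homophilous regime. (The paper argues for an improved measure, which is not reproduced here.)

```python
# Classical edge-homophily ratio of a labeled graph.
import torch

def edge_homophily(edge_index, labels):
    """edge_index: (2, E) source/target node indices; labels: (N,) node labels."""
    src, dst = edge_index
    return (labels[src] == labels[dst]).float().mean().item()

edges = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
print(edge_homophily(edges, torch.tensor([0, 0, 1, 1])))  # 0.5: half the edges are intra-class
```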
12. Tukey Depths and Hamilton-Jacobi Differential Equations
Martin Molina-Fructuoso, Ryan Murray
- retweets: 33, favorites: 46 (04/07/2021 10:15:29)
- links: abs | pdf
- math.ST | cs.LG | math.AP | stat.ML
The widespread application of modern machine learning has increased the need for robust statistical algorithms. This work studies one such fundamental statistical measure known as the Tukey depth. We study the problem in the continuum (population) limit. In particular, we derive the associated necessary conditions, which take the form of a first-order partial differential equation. We discuss the classical interpretation of this necessary condition as the viscosity solution of a Hamilton-Jacobi equation, but with a non-classical Hamiltonian with discontinuous dependence on the gradient at zero. We prove that this equation possesses a unique viscosity solution and that this solution always bounds the Tukey depth from below. In certain cases, we prove that the Tukey depth is equal to the viscosity solution, and we give some illustrations of standard numerical methods from the optimal control community which deal directly with the partial differential equation. We conclude by outlining several promising research directions both in terms of new numerical algorithms and theoretical challenges.
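For context, the Tukey depth in question is the classical halfspace depth: the depth of a point under a probability measure is the smallest mass assigned to a closed halfspace whose boundary passes through that point (this is the standard population definition, not notation lifted from the paper):

$$
T_\mu(x) \;=\; \inf_{\|v\|=1} \mu\bigl(\{\, y \in \mathbb{R}^d : \langle y - x,\, v \rangle \ge 0 \,\}\bigr).
$$

The Tukey median is any maximizer of $T_\mu$, which for $d = 1$ recovers the usual median.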
an unexpected connection: https://t.co/KcFqqFRQ2D shows that the Tukey depth (related to the Tukey median, a generalisation of the median to d > 1 which retains good statistical + robustness properties) of a distribution can be characterised in terms of a Hamilton-Jacobi PDE!
— Sam Power (@sam_power_825) April 6, 2021
13. Deep Learning of Conjugate Mappings
Jason J. Bramburger, Steven L. Brunton, J. Nathan Kutz
Despite many of the most common chaotic dynamical systems being continuous in time, it is through discrete time mappings that much of the understanding of chaos is formed. Henri Poincaré first made this connection by tracking consecutive iterations of the continuous flow with a lower-dimensional, transverse subspace. The mapping that iterates the dynamics through consecutive intersections of the flow with the subspace is now referred to as a Poincaré map, and it is the primary method available for interpreting and classifying chaotic dynamics. Unfortunately, in all but the simplest systems, an explicit form for such a mapping remains outstanding. This work proposes a method for obtaining explicit Poincaré mappings by using deep learning to construct an invertible coordinate transformation into a conjugate representation where the dynamics are governed by a relatively simple chaotic mapping. The invertible change of variable is based on an autoencoder, which allows for dimensionality reduction, and has the advantage of classifying chaotic systems using the equivalence relation of topological conjugacies. Indeed, the enforcement of topological conjugacies is the critical neural network regularization for learning the coordinate and dynamics pairing. We provide expository applications of the method to low-dimensional systems such as the Rössler and Lorenz systems, while also demonstrating the utility of the method on infinite-dimensional systems, such as the Kuramoto–Sivashinsky equation.
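The conjugacy constraint is the trainable heart of the method: with an encoder phi, an (approximate) inverse decoder, and a simple latent map f, consecutive Poincaré-section samples x and P(x) should satisfy f(phi(x)) ≈ phi(P(x)). A hedged sketch of such a loss is below; module names and the weighting are assumptions, not the authors' code.

```python
# Assumed sketch of a topological-conjugacy training objective for Poincaré-map data.
import torch
import torch.nn.functional as F

def conjugacy_loss(phi, phi_inv, f, x, x_next, lam=1.0):
    """x, x_next: consecutive Poincaré-section samples, so x_next = P(x)."""
    z, z_next = phi(x), phi(x_next)
    recon = F.mse_loss(phi_inv(z), x) + F.mse_loss(phi_inv(z_next), x_next)  # invertibility
    conj = F.mse_loss(f(z), z_next)                                          # f ∘ phi ≈ phi ∘ P
    return recon + lam * conj
```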
Excited to share some new work with @eigensteve and J.N. Kutz. We leverage deep learning to discover low-dimensional Poincare mappings that govern chaotic systems. We even applied it to the Kuramoto-Sivashinsky PDE! https://t.co/SQOLRtM7rc
— Jason J. Bramburger (@jbramburger7) April 6, 2021