1. Bottleneck Transformers for Visual Recognition
Aravind Srinivas, Tsung-Yi Lin, Niki Parmar, Jonathon Shlens, Pieter Abbeel, Ashish Vaswani
We present BoTNet, a conceptually simple yet powerful backbone architecture that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck blocks of a ResNet and no other changes, our approach improves upon the baselines significantly on instance segmentation and object detection while also reducing the parameters, with minimal overhead in latency. Through the design of BoTNet, we also point out how ResNet bottleneck blocks with self-attention can be viewed as Transformer blocks. Without any bells and whistles, BoTNet achieves 44.4% Mask AP and 49.7% Box AP on the COCO Instance Segmentation benchmark using the Mask R-CNN framework; surpassing the previous best published single model and single scale results of ResNeSt evaluated on the COCO validation set. Finally, we present a simple adaptation of the BoTNet design for image classification, resulting in models that achieve a strong performance of 84.7% top-1 accuracy on the ImageNet benchmark while being up to 2.33x faster in compute time than the popular EfficientNet models on TPU-v3 hardware. We hope our simple and effective approach will serve as a strong baseline for future research in self-attention models for vision.
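The central change is easy to sketch: the 3x3 spatial convolution in a ResNet bottleneck block is swapped for global self-attention over all spatial positions. Below is a minimal single-head numpy sketch of that attention step; shapes and weight names are illustrative, not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_self_attention(x, wq, wk, wv):
    """Stand-in for the 3x3 conv in a bottleneck block: x has shape
    (H, W, d) and attention is computed over all H*W positions."""
    h, w, d = x.shape
    tokens = x.reshape(h * w, d)                    # flatten grid to a sequence
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (HW, HW) global attention
    return (attn @ v).reshape(h, w, -1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 8))
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = global_self_attention(x, wq, wk, wv)
print(out.shape)  # (4, 4, 8)
```

BoTNet additionally uses multiple heads and relative position encodings; this sketch only shows why the block can be read as a Transformer block acting on a spatial grid.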
Bottleneck Transformers for Visual Recognition
— Aran Komatsuzaki (@arankomatsuzaki) January 28, 2021
SotA visual recognition model with convolution+attention that outperforms EfficientNet and DeiT in terms of performance-compute trade-off. https://t.co/pqEsfwDLYC pic.twitter.com/GXoQLEQYy5
Bottleneck Transformers for Visual Recognition
— AK (@ak92501) January 28, 2021
pdf: https://t.co/nMiQYSrd3X
abs: https://t.co/MMLWCtgdSP pic.twitter.com/CSjNBm6gnL
2. Identification of brain states, transitions, and communities using functional MRI
Lingbin Bian, Tiangang Cui, B.T. Thomas Yeo, Alex Fornito, Adeel Razi, Jonathan Keith
Brain function relies on a precisely coordinated and dynamic balance between the functional integration and segregation of distinct neural systems. Characterizing the way in which neural systems reconfigure their interactions to give rise to distinct but hidden brain states remains an open challenge. In this paper, we propose a Bayesian model-based characterization of latent brain states and showcase a novel method based on posterior predictive discrepancy using the latent block model to detect transitions between latent brain states in blood oxygen level-dependent (BOLD) time series. The set of estimated parameters in the model includes a latent label vector that assigns network nodes to communities, and also block model parameters that reflect the weighted connectivity within and between communities. Besides extensive in-silico model evaluation, we also provide empirical validation (and replication) using the Human Connectome Project (HCP) dataset of 100 healthy adults. Our results obtained through an analysis of task-fMRI data during working memory performance show appropriate lags between external task demands and change-points between brain states, with distinctive community patterns distinguishing fixation, low-demand and high-demand task conditions.
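The generative side of a latent block model, and the idea behind a posterior predictive discrepancy, can be sketched in a few lines. Everything below (labels, block means, noise level) is an illustrative toy, not the paper's fitted model.

```python
import numpy as np

def sample_network(z, block_means, sigma, rng):
    """Draw a weighted connectivity matrix from a Gaussian latent block
    model: edge (i, j) ~ Normal(block_means[z[i], z[j]], sigma^2)."""
    mu = block_means[np.ix_(z, z)]
    a = rng.normal(mu, sigma)
    return (a + a.T) / 2                           # symmetric connectivity

def discrepancy(observed, replicated):
    """A simple predictive discrepancy: how far a replicated network
    sits from the observed one."""
    return np.abs(observed - replicated).mean()

rng = np.random.default_rng(1)
z = np.array([0, 0, 0, 1, 1, 1])                   # latent community labels
means = np.array([[1.0, -0.5],                     # strong within-community,
                  [-0.5, 1.0]])                    # weaker between-community
obs = sample_network(z, means, 0.1, rng)
rep_good = sample_network(z, means, 0.1, rng)      # replicate with true labels
z_wrong = np.array([0, 1, 0, 1, 0, 1])
rep_bad = sample_network(z_wrong, means, 0.1, rng)
print(discrepancy(obs, rep_good) < discrepancy(obs, rep_bad))  # True
```

Replicates generated under the correct community labels sit much closer to the observed network than replicates under scrambled labels, which is the intuition the change-point detection exploits.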
New preprint from our lab:
— Adeel Razi (@adeelrazi) January 27, 2021
"Identification of brain states, transitions, and communities using functional MRI". https://t.co/nSIbwdDhTS
Work led by the brilliant PhD student Lingbin Bian @LBIAN5 in collaboration w/ Tiangang Cui, @bttyeo @AFornito & @ThatNinjaHoke 1/6 pic.twitter.com/ZSDZ7pBsF6
3. In-IDE Code Generation from Natural Language: Promise and Challenges
Frank F. Xu, Bogdan Vasilescu, Graham Neubig
A great part of software development involves conceptualizing or communicating the underlying procedures and logic that needs to be expressed in programs. One major difficulty of programming is turning concept into code, especially when dealing with the APIs of unfamiliar libraries. Recently, there has been a proliferation of machine learning methods for code generation and retrieval from natural language queries, but these have primarily been evaluated purely based on retrieval accuracy or overlap of generated code with developer-written code, and the actual effect of these methods on the developer workflow is surprisingly under-studied. We perform the first comprehensive investigation of the promise and challenges of using such technology inside the IDE, asking “at the current state of technology does it improve developer productivity or accuracy, how does it affect the developer experience, and what are the remaining gaps and challenges?” We first develop a plugin for the IDE that implements a hybrid of code generation and code retrieval functionality, and orchestrate virtual environments to enable collection of many user events. We ask developers with various backgrounds to complete 14 Python programming tasks ranging from basic file manipulation to machine learning or data visualization, with or without the help of the plugin. While qualitative surveys of developer experience are largely positive, quantitative results with regards to increased productivity, code quality, or program correctness are inconclusive. Analysis identifies several pain points that could improve the effectiveness of future machine learning based code generation/retrieval developer assistants, and demonstrates when developers prefer code generation over code retrieval and vice versa. We release all data and software to pave the road for future empirical studies and development of better models.
There has been much interest in ML methods that generate source code (e.g. Python) from English commands. But does this actually help software developers? We asked 31 developers to use a code generation plugin, and found some interesting results: https://t.co/ifiG3EYK3J 1/7 pic.twitter.com/u0mgY0LSSj
— Graham Neubig (@gneubig) January 28, 2021
4. Hiding Behind Machines: When Blame Is Shifted to Artificial Agents
Till Feier, Jan Gogoll, Matthias Uhl
The transfer of tasks with sometimes far-reaching moral implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. This article focuses on the responsibility of agents who decide on our behalf. We investigate the empirically accessible question of whether the production of moral outcomes by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually rid themselves of guilt more easily by delegating to machines than by delegating to other people. Our results imply that the availability of artificial agents could provide stronger incentives for decision makers to delegate morally sensitive decisions.
"The results ... suggest that decision-makers can actually rid themselves of guilt more easily by delegating to machines than by delegating to other people." 😬
— Miles Brundage (@Miles_Brundage) January 28, 2021
"Hiding Behind Machines: When Blame Is Shifted to Artificial Agents," Feier et al.: https://t.co/fpNTNdkY7A
5. Automatic Comic Generation with Stylistic Multi-page Layouts and Emotion-driven Text Balloon Generation
Xin Yang, Zongliang Ma, Letian Yu, Ying Cao, Baocai Yin, Xiaopeng Wei, Qiang Zhang, Rynson W.H. Lau
In this paper, we propose a fully automatic system for generating comic books from videos without any human intervention. Given an input video along with its subtitles, our approach first extracts informative keyframes by analyzing the subtitles, and stylizes keyframes into comic-style images. Then, we propose a novel automatic multi-page layout framework, which can allocate the images across multiple pages and synthesize visually interesting layouts based on the rich semantics of the images (e.g., importance and inter-image relation). Finally, as opposed to using the same type of balloon as in previous works, we propose an emotion-aware balloon generation method to create different types of word balloons by analyzing the emotion of subtitles and audios. Our method is able to vary balloon shapes and word sizes in balloons in response to different emotions, leading to a more enriched reading experience. Once the balloons are generated, they are placed adjacent to their corresponding speakers via speaker detection. Our results show that our method, without requiring any user inputs, can generate high-quality comic pages with visually rich layouts and balloons. Our user studies also demonstrate that users prefer our generated results over those by state-of-the-art comic generation systems.
Automatic Comic Generation with Stylistic Multi-page Layouts and Emotion-driven Text Balloon Generation
— AK (@ak92501) January 28, 2021
pdf: https://t.co/RESHiBPtTz
abs: https://t.co/YWWrB0iVUS pic.twitter.com/UAuOQdRmF5
6. LDLE: Low Distortion Local Eigenmaps
Dhruv Kohli, Alexander Cloninger, Gal Mishne
- retweets: 212, favorites: 63 (01/29/2021 09:20:02)
- links: abs | pdf
- math.SP | cs.LG | math.AP | stat.ML
We present Low Distortion Local Eigenmaps (LDLE), a manifold learning technique which constructs a set of low distortion local views of a dataset in lower dimension and registers them to obtain a global embedding. The local views are constructed using the global eigenvectors of the graph Laplacian and are registered using Procrustes analysis. The choice of these eigenvectors may vary across the regions. In contrast to existing techniques, LDLE is more geometric and can embed manifolds without boundary as well as non-orientable manifolds into their intrinsic dimension.
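The registration step can be illustrated with standard orthogonal Procrustes analysis, which LDLE uses to stitch local views into a global embedding; the data below are synthetic.

```python
import numpy as np

def procrustes_align(source, target):
    """Orthogonal Procrustes: find rotation R and translation t so that
    source @ R + t best matches target in least squares."""
    mu_s, mu_t = source.mean(0), target.mean(0)
    u, _, vt = np.linalg.svd((source - mu_s).T @ (target - mu_t))
    r = u @ vt
    return r, mu_t - mu_s @ r

rng = np.random.default_rng(2)
view = rng.normal(size=(20, 2))                 # a low-distortion local view
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
target = view @ rot + np.array([3.0, -1.0])     # same view, rotated + shifted
r, t = procrustes_align(view, target)
print(np.allclose(view @ r + t, target))        # True: alignment recovered
```

When two local views overlap on shared points, solving this problem on the overlap gives the rigid motion that registers one view to the other.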
New preprint: LDLE: Low Distortion Local Eigenmaps https://t.co/1JjZK2gG8y
— Gal Mishne (@gmishne) January 28, 2021
We present a bottom-up manifold learning approach such that distances in the embedding space have minimal distortion
w/ (twitterless) Dhruv Kohli and Alex Cloninger
A 🧵 (1/5) pic.twitter.com/MzAAIZbsig
7. Muppet: Massive Multi-task Representations with Pre-Finetuning
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta
We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around 50 datasets, over 4.8 million total labeled examples), and is designed to encourage learning of representations that generalize better to many different tasks. We show that pre-finetuning consistently improves performance for pretrained discriminators (e.g.~RoBERTa) and generation models (e.g.~BART) on a wide range of tasks (sentence prediction, commonsense reasoning, MRC, etc.), while also significantly improving sample efficiency during fine-tuning. We also show that large-scale multi-tasking is crucial; pre-finetuning can hurt performance when few tasks are used up until a critical point (usually above 15) after which performance improves linearly in the number of tasks.
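One practical ingredient of massive multi-task learning is deciding which dataset each training batch is drawn from. A common recipe, sketched below with made-up dataset sizes, is size-proportional sampling; the paper's exact mixing and loss-scaling strategy may differ.

```python
import random

def make_batch_schedule(dataset_sizes, n_batches, seed=0):
    """Sample which dataset each batch comes from, with probability
    proportional to dataset size -- a common multi-task mixing recipe."""
    names = list(dataset_sizes)
    total = sum(dataset_sizes.values())
    weights = [dataset_sizes[n] / total for n in names]
    rng = random.Random(seed)
    return [rng.choices(names, weights)[0] for _ in range(n_batches)]

# Illustrative dataset sizes, not the paper's 50-task mixture.
sizes = {"mnli": 393_000, "squad": 88_000, "sst2": 67_000}
schedule = make_batch_schedule(sizes, 1000)
print(schedule[:5])
```

Larger datasets dominate the schedule, which is why pre-finetuning also needs per-task loss scaling so that small tasks are not drowned out.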
I'm happy to present our new paper MUPPET (https://t.co/egLCMnYYvj), arguing for an additional stage between pre-training and fine-tuning, called pre-finetuning which uses massively multi-task learning (~50 tasks) to further refine representations.
— Armen (@ArmenAgha) January 28, 2021
Muppet: Massive Multi-task Representations with Pre-Finetuning
— Aran Komatsuzaki (@arankomatsuzaki) January 28, 2021
Pretraining a pretrained LM with a massive labeled dataset leads to substantial improvement in fine-tuning. https://t.co/zdMpKBeb1O pic.twitter.com/oIEvwDmCuO
8. VisualMRC: Machine Reading Comprehension on Document Images
Ryota Tanaka, Kyosuke Nishida, Sen Yoshida
Recent studies on machine reading comprehension have focused on text-level understanding but have not yet reached the level of human understanding of the visual layout and content of real-world documents. In this study, we introduce a new visual machine reading comprehension dataset, named VisualMRC, wherein given a question and a document image, a machine reads and comprehends texts in the image to answer the question in natural language. Compared with existing visual question answering (VQA) datasets that contain texts in images, VisualMRC focuses more on developing natural language understanding and generation abilities. It contains 30,000+ pairs of a question and an abstractive answer for 10,000+ document images sourced from multiple domains of webpages. We also introduce a new model that extends existing sequence-to-sequence models, pre-trained with large-scale text corpora, to take into account the visual layout and content of documents. Experiments with VisualMRC show that this model outperformed the base sequence-to-sequence models and a state-of-the-art VQA model. However, its performance is still below that of humans on most automatic evaluation metrics. The dataset will facilitate research aimed at connecting vision and language understanding.
Our #AAAI2021 paper with @kyoun is out on arXiv! https://t.co/8I3JrMQt40
— Ryota Tanaka (@rtanaka_lab) January 28, 2021
We introduce VisualMRC that requires a system to read and reason about text in the document image. We propose new models that allow for transferring the abilities of pre-trained seq2seq models, to this task. pic.twitter.com/7tyCY3zVaO
We have uploaded to arXiv our paper co-authored with Kyosuke (@kyoun), accepted at #AAAI2021. https://t.co/8I3JrMQt40
— Ryota Tanaka (@rtanaka_lab) January 28, 2021
We proposed VisualMRC, a task of visually answering questions about document images. We also proposed models that adapt pre-trained seq2seq models to this task. pic.twitter.com/yl6BHB8O8y
9. QFold: Quantum Walks and Deep Learning to Solve Protein Folding
P A M Casares, Roberto Campos, M A Martin-Delgado
- retweets: 131, favorites: 39 (01/29/2021 09:20:02)
- links: abs | pdf
- quant-ph | cond-mat.stat-mech | cs.LG
We develop quantum computational tools to predict how proteins fold in 3D, one of the most important problems in current biochemical research. We explain how to combine recent deep learning advances with the well-known technique of quantum walks applied to a Metropolis algorithm. The result, QFold, is a fully scalable hybrid quantum algorithm that in contrast to previous quantum approaches does not require a lattice model simplification and instead relies on the much more realistic assumption of parameterization in terms of torsion angles of the amino acids. We compare it with its classical analog for different annealing schedules and find a polynomial quantum advantage, and validate a proof-of-concept realization of the quantum Metropolis on the IBMQ Casablanca quantum processor.
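The classical analogue that QFold is compared against is a Metropolis walk over torsion angles. A toy version with an illustrative quadratic energy (a real run would use a learned or physical torsion potential):

```python
import math
import random

def metropolis_torsions(energy, n_angles, steps, beta=1.0, seed=0):
    """Classical Metropolis walk over amino-acid torsion angles -- the
    classical counterpart of QFold's quantum walk."""
    rng = random.Random(seed)
    angles = [rng.uniform(-math.pi, math.pi) for _ in range(n_angles)]
    e = energy(angles)
    for _ in range(steps):
        proposal = angles[:]
        proposal[rng.randrange(n_angles)] += rng.gauss(0.0, 0.3)
        e_new = energy(proposal)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if e_new < e or rng.random() < math.exp(-beta * (e_new - e)):
            angles, e = proposal, e_new
    return angles, e

toy_energy = lambda a: sum(x * x for x in a)   # minimum at all angles = 0
angles, e = metropolis_torsions(toy_energy, n_angles=4, steps=5000, beta=5.0)
print(round(e, 2))
```

The quantum version replaces this accept/reject random walk with a quantum walk, which is where the reported polynomial advantage in mixing comes from.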
10. Mining Large-Scale Low-Resource Pronunciation Data From Wikipedia
Tania Chakraborty, Manasa Prasad, Theresa Breiner, Sandy Ritchie, Daan van Esch
Pronunciation modeling is a key task for building speech technology in new languages, and while solid grapheme-to-phoneme (G2P) mapping systems exist, language coverage can stand to be improved. The information needed to build G2P models for many more languages can easily be found on Wikipedia, but unfortunately, it is stored in disparate formats. We report on a system we built to mine a pronunciation data set in 819 languages from loosely structured tables within Wikipedia. The data includes phoneme inventories, and for 63 low-resource languages, also includes the grapheme-to-phoneme (G2P) mapping. 54 of these languages do not have easily findable G2P mappings online otherwise. We turned the information from Wikipedia into a structured, machine-readable TSV format, and make the resulting data set publicly available so it can be improved further and used in a variety of applications involving low-resource languages.
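The end product of such mining is simply structured rows. A small sketch of turning scraped (grapheme, phoneme) pairs into TSV; the column layout here is illustrative, not the released dataset's actual schema.

```python
def rows_to_tsv(language, rows):
    """Convert loosely structured (grapheme, phoneme) pairs scraped from a
    Wikipedia orthography table into machine-readable TSV lines."""
    lines = ["language\tgrapheme\tphoneme"]
    for grapheme, phoneme in rows:
        # Strip slashes/brackets that Wikipedia uses around IPA notation.
        lines.append(f"{language}\t{grapheme.strip()}\t{phoneme.strip('/[] ')}")
    return "\n".join(lines)

# Illustrative Māori (mi) grapheme-to-phoneme rows.
tsv = rows_to_tsv("mi", [("wh", "/f/"), ("ng", "/ŋ/")])
print(tsv)
```

The hard part the paper addresses is upstream of this: locating and normalizing those loosely structured tables across 819 languages.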
Can we use the knowledge in Wikipedia articles to help speech technology learn how to pronounce a new language? Yes and no! See this tech report which we just posted to arXiv: https://t.co/BqVVZyeS9X or read this short thread:
— Daan van Esch (@daanvanesch) January 28, 2021
11. A Convolutional Neural Network based Cascade Reconstruction for the IceCube Neutrino Observatory
R. Abbasi, M. Ackermann, J. Adams, J. A. Aguilar, M. Ahlers, M. Ahrens, C. Alispach, A. A. Alves Jr., N. M. Amin, R. An, K. Andeen, T. Anderson, I. Ansseau, G. Anton, C. Argüelles, S. Axani, X. Bai, A. Balagopal V., A. Barbano, S. W. Barwick, B. Bastian, V. Basu, V. Baum, S. Baur, R. Bay, J. J. Beatty, K.-H. Becker, J. Becker Tjus, C. Bellenghi, S. BenZvi, D. Berley, E. Bernardini, D. Z. Besson, G. Binder, D. Bindig, E. Blaufuss, S. Blot, S. Böser, O. Botner, J. Böttcher, E. Bourbeau, J. Bourbeau, F. Bradascio, J. Braun, S. Bron, J. Brostean-Kaiser, A. Burgman, R. S. Busse, M. A. Campana, C. Chen, D. Chirkin, S. Choi, B. A. Clark, K. Clark, L. Classen, A. Coleman, G. H. Collin, J. M. Conrad, P. Coppin, P. Correa, D. F. Cowen, R. Cross, P. Dave, C. De Clercq, J. J. DeLaunay
Continued improvements on existing reconstruction methods are vital to the success of high-energy physics experiments, such as the IceCube Neutrino Observatory. In IceCube, further challenges arise as the detector is situated at the geographic South Pole where computational resources are limited. However, to perform real-time analyses and to issue alerts to telescopes around the world, powerful and fast reconstruction methods are desired. Deep neural networks can be extremely powerful, and their usage is computationally inexpensive once the networks are trained. These characteristics make a deep learning-based approach an excellent candidate for the application in IceCube. A reconstruction method based on convolutional architectures and hexagonally shaped kernels is presented. The presented method is robust towards systematic uncertainties in the simulation and has been tested on experimental data. In comparison to standard reconstruction methods in IceCube, it can improve upon the reconstruction accuracy, while reducing the time necessary to run the reconstruction by two to three orders of magnitude.
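One common way to realise a hexagonally shaped kernel on ordinary square tensors is to mask a square kernel using axial hex coordinates; a small sketch (the paper's exact kernel construction may differ):

```python
import numpy as np

def hex_kernel_mask(radius):
    """Mask for a hexagonally shaped convolution kernel on an axially
    indexed grid: keep offsets (q, r) within hex distance `radius`."""
    size = 2 * radius + 1
    mask = np.zeros((size, size))
    for q in range(-radius, radius + 1):
        for r in range(-radius, radius + 1):
            # Hex distance in axial coordinates via cube coordinates.
            if max(abs(q), abs(r), abs(q + r)) <= radius:
                mask[q + radius, r + radius] = 1.0
    return mask

print(hex_kernel_mask(1))  # 7 active taps: a centre cell plus 6 neighbours
```

Multiplying a learned square kernel by this mask restricts it to the hexagonal neighbourhood of IceCube's string layout while keeping standard convolution machinery.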
303 additional authors not shown 😮 https://t.co/Ajc49fdefJ pic.twitter.com/tzkjmRaJxQ
— AK (@ak92501) January 28, 2021
12. Operads for complex system design specification, analysis and synthesis
John D. Foley, Spencer Breiner, Eswaran Subrahmanian, John M. Dusel
As the complexity and heterogeneity of a system grows, the challenge of specifying, documenting and synthesizing correct, machine readable designs increases dramatically. Separation of the system into manageable parts is needed to support analysis at various levels of granularity so that the system is maintainable and adaptable over its life cycle. In this paper, we argue that operads provide an effective knowledge representation to address these challenges. Formal documentation of a syntactically correct complex design is built up during design synthesis, while semantic reasoning about which designs are effective guides the process. Throughout, the ability to break down the system into parts and reconstitute the whole is maintained. We describe recent progress in effective modeling under this paradigm and directions for future work to systematically address scalability challenges for complex system design.
“Operads for complex system design specification, analysis and synthesis” by John Foley, Spencer Breiner, Eswaran Subrahmanian and John Dusel https://t.co/Zk1GIgXqpd
— julesh (@_julesh_) January 28, 2021
Been waiting for this paper for a couple of years, since I first saw this stuff in some form pic.twitter.com/TJK15LmCyv
13. Quantum machine learning models are kernel methods
Maria Schuld
With near-term quantum devices available and the race for fault-tolerant quantum computers in full swing, researchers became interested in the question of what happens if we replace a machine learning model with a quantum circuit. While such “quantum models” are sometimes called “quantum neural networks”, it has been repeatedly noted that their mathematical structure is actually much more closely related to kernel methods: they analyse data in high-dimensional Hilbert spaces to which we only have access through inner products revealed by measurements. This technical manuscript summarises, formalises and extends the link by systematically rephrasing quantum models as a kernel method. It shows that most near-term and fault-tolerant quantum models can be replaced by a general support vector machine whose kernel computes distances between data-encoding quantum states. In particular, kernel-based training is guaranteed to find better or equally good quantum models than variational circuit training. Overall, the kernel perspective of quantum machine learning tells us that the way that data is encoded into quantum states is the main ingredient that can potentially set quantum models apart from classical machine learning models.
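The link can be made concrete in a single qubit: encode a data point into a quantum state and let the kernel be the squared overlap between encoding states. A toy numpy illustration (the encoding is illustrative, not a specific circuit from the manuscript):

```python
import numpy as np

def encode(x):
    """Toy angle encoding of a scalar into a single-qubit state vector."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    """Kernel value as the overlap |<phi(x1)|phi(x2)>|^2 between
    data-encoding quantum states."""
    return abs(encode(x1) @ encode(x2)) ** 2

print(round(quantum_kernel(0.3, 0.3), 6))  # 1.0: identical encodings
```

Any such kernel can be handed to a classical support vector machine, which is the substitution the manuscript argues is always available for these quantum models.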
14. Boosting Segmentation Performance across datasets using histogram specification with application to pelvic bone segmentation
Prabhakara Subramanya Jois, Aniketh Manjunath, Thomas Fevens
Accurate segmentation of pelvic CTs is crucial for the clinical diagnosis of pelvic bone diseases and for planning patient-specific hip surgeries. With the emergence and advancements of deep learning for digital healthcare, several methodologies have been proposed for such segmentation tasks. But in a low data scenario, the lack of abundant data needed to train a Deep Neural Network is a significant bottleneck. In this work, we propose a methodology based on modulation of image tonal distributions and deep learning to boost the performance of networks trained on limited data. The strategy involves pre-processing of test data through histogram specification. This simple yet effective approach can be viewed as a style transfer methodology. The segmentation task uses a U-Net configuration with an EfficientNet-B0 backbone, optimized using an augmented BCE-IoU loss function. This configuration is validated on a total of 284 images taken from two publicly available CT datasets, TCIA (a cancer imaging archive) and the Visible Human Project. The average performance measures for the Dice coefficient and Intersection over Union are 95.7% and 91.9%, respectively, giving strong evidence for the effectiveness of the approach, which is highly competitive with state-of-the-art methodologies.
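Histogram specification itself is a short, standard operation: map each test-image intensity through the source's empirical CDF and then through the inverse CDF of a reference. A numpy sketch with synthetic intensities standing in for CT slices:

```python
import numpy as np

def histogram_specification(source, reference):
    """Remap source intensities so their histogram matches the reference's,
    via empirical-CDF / inverse-CDF matching."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    matched = np.interp(s_cdf, r_cdf, r_vals)   # inverse CDF of the reference
    return matched[s_idx].reshape(source.shape)

rng = np.random.default_rng(3)
src = rng.normal(100, 10, size=(64, 64))        # stand-in for a test slice
ref = rng.normal(150, 30, size=(64, 64))        # training-set tonal profile
out = histogram_specification(src, ref)
print(round(out.mean(), 1), round(out.std(), 1))
```

After the remap, the test image's tonal distribution tracks the reference's, which is exactly the pre-processing step applied before running the trained U-Net.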