1. A Survey of Data Augmentation Approaches for NLP
Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy
Data augmentation has recently seen increased interest in NLP due to more work in low-resource domains, new tasks, and the popularity of large-scale neural networks that require large amounts of training data. Despite this recent upsurge, this area is still relatively underexplored, perhaps due to the challenges posed by the discrete nature of language data. In this paper, we present a comprehensive and unifying survey of data augmentation for NLP by summarizing the literature in a structured manner. We first introduce and motivate data augmentation for NLP, and then discuss major methodologically representative approaches. Next, we highlight techniques that are used for popular NLP applications and tasks. We conclude by outlining current challenges and directions for future research. Overall, our paper aims to clarify the landscape of existing literature in data augmentation for NLP and motivate additional work in this area.
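To make the surveyed family of techniques concrete, here is a minimal sketch of one simple rule-based augmentation, random token swapping in the spirit of Easy Data Augmentation (EDA); the function name and parameters are illustrative, not taken from the paper.

```python
import random

def random_swap(tokens, n_swaps=1, seed=None):
    """Rule-based NLP augmentation: swap n random pairs of token positions.

    A minimal sketch in the spirit of Easy Data Augmentation (EDA);
    the name and defaults are illustrative, not from the survey.
    """
    rng = random.Random(seed)
    tokens = list(tokens)
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = rng.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

print(random_swap("the cat sat on the mat".split(), n_swaps=2, seed=0))
```

Label-preserving perturbations like this trade a little grammaticality for extra training diversity, the basic trade-off running through the methods the survey catalogs.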
Data augmentation is useful in machine learning as the need for greater data diversity grows.
— elvis (@omarsar0) May 10, 2021
It's nice to see this recent survey paper summarizing the latest data augmentation approaches for NLP.
A must-read for ML/NLP students and practitioners. https://t.co/djbnXyHRGT pic.twitter.com/PYYdqkbzVX
2. Are Pre-trained Convolutions Better than Pre-trained Transformers?
Yi Tay, Mostafa Dehghani, Jai Gupta, Dara Bahri, Vamsi Aribandi, Zhen Qin, Donald Metzler
In the era of pre-trained language models, Transformers are the de facto choice of model architecture. While recent research has shown promise in entirely convolutional (CNN) architectures, they have not been explored under the pre-train-fine-tune paradigm. In the context of language models, are convolutional models competitive with Transformers when pre-trained? This paper investigates this research question and presents several interesting findings. Across an extensive set of experiments on 8 datasets/tasks, we find that CNN-based pre-trained models are competitive and outperform their Transformer counterparts in certain scenarios, albeit with caveats. Overall, the findings outlined in this paper suggest that conflating pre-training and architectural advances is misguided and that the two should be considered independently. We believe our research paves the way for a healthy amount of optimism in alternative architectures.
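As a rough illustration of the kind of convolutional building block that can stand in for self-attention, below is a hedged sketch of a depthwise-separable 1D convolution over token embeddings; the hyperparameters are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvSeqBlock(nn.Module):
    """Depthwise-separable 1D convolution over a token sequence.

    A hedged sketch of the kind of convolutional block pre-trained in
    place of self-attention; sizes are illustrative, not the paper's.
    """

    def __init__(self, d_model=256, kernel_size=7):
        super().__init__()
        self.depthwise = nn.Conv1d(
            d_model, d_model, kernel_size,
            padding=kernel_size // 2, groups=d_model)
        self.pointwise = nn.Conv1d(d_model, d_model, 1)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):          # x: (batch, seq_len, d_model)
        residual = x
        x = x.transpose(1, 2)      # Conv1d expects (batch, channels, seq)
        x = self.pointwise(torch.relu(self.depthwise(x)))
        x = x.transpose(1, 2)
        return self.norm(x + residual)

out = ConvSeqBlock()(torch.randn(2, 16, 256))  # -> (2, 16, 256)
```

Unlike self-attention, the receptive field here is fixed by the kernel size, which is exactly why the pre-training question the paper asks is nontrivial.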
February: "Transformer is All you Need!" https://t.co/XGmFWuxAgF
— えるエル (@ImAI_Eruel) May 10, 2021
March: "Attention is not All you Need!" https://t.co/4lByFg3EhM
Today: "Are convolutions stronger than Transformers?" https://t.co/Q9Yz6wYgF0
The Transformer research scene has become a wild place.
Are Pre-trained Convolutions Better than Pre-trained Transformers?
— AK (@ak92501) May 10, 2021
pdf: https://t.co/8L06XiPM1C
abs: https://t.co/gIAq2Od5GA
experimental results show that convolutions can outperform Transformers in both pre-trained and non-pre-trained setups pic.twitter.com/IZtYlBXvkc
3. ResMLP: Feedforward networks for image classification with data-efficient training
Hugo Touvron, Piotr Bojanowski, Mathilde Caron, Matthieu Cord, Alaaeldin El-Nouby, Edouard Grave, Armand Joulin, Gabriel Synnaeve, Jakob Verbeek, Hervé Jégou
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. It is a simple residual network that alternates (i) a linear layer in which image patches interact, independently and identically across channels, and (ii) a two-layer feed-forward network in which channels interact independently per patch. When trained with a modern training strategy using heavy data-augmentation and optionally distillation, it attains surprisingly good accuracy/complexity trade-offs on ImageNet. We will share our code based on the Timm library and pre-trained models.
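The abstract describes the block structure precisely enough to sketch it; below is a minimal PyTorch rendition of one ResMLP block. Note that the real model uses a learned affine normalization, while this sketch substitutes LayerNorm for simplicity.

```python
import torch
import torch.nn as nn

class ResMLPBlock(nn.Module):
    """One ResMLP block: (i) a linear layer mixing patches per channel,
    (ii) a two-layer MLP mixing channels per patch, each with a residual.

    Sketched from the abstract's description; the real model uses a
    learned affine normalization rather than the LayerNorm used here.
    """

    def __init__(self, num_patches, dim, expansion=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, expansion * dim),
            nn.GELU(),
            nn.Linear(expansion * dim, dim))

    def forward(self, x):  # x: (batch, num_patches, dim)
        # (i) cross-patch interaction, identical across channels
        x = x + self.patch_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        # (ii) per-patch channel interaction
        return x + self.channel_mlp(self.norm2(x))

out = ResMLPBlock(num_patches=196, dim=384)(torch.randn(2, 196, 384))
```

The transpose trick is the whole "patch mixing" step: the same linear map over the patch axis is applied identically to every channel.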
ResMLP: Feedforward networks for image classification with data-efficient training
— Aran Komatsuzaki (@arankomatsuzaki) May 10, 2021
- proposes ResMLP, an architecture built entirely upon MLPs for image classification.
- attains surprisingly good accuracy/complexity trade-offs on ImageNet. https://t.co/R9bGmszryB pic.twitter.com/cihkPrbkWy
Repeat after me, another day, another MLP architecture https://t.co/XdOGML9Vmy pic.twitter.com/7EtombxELq
— Kosta Derpanis (@CSProfKGD) May 10, 2021
ResMLP: Feedforward networks for image classification with data-efficient training
— AK (@ak92501) May 10, 2021
pdf: https://t.co/0cTkeTwwne
abs: https://t.co/DA1zkiYq0S
an architecture built entirely upon multi-layer perceptrons for image classification pic.twitter.com/7VmEyovIkW
4. What Kinds of Functions do Deep Neural Networks Learn? Insights from Variational Spline Theory
Rahul Parhi, Robert D. Nowak
We develop a variational framework to understand the properties of functions learned by deep neural networks with ReLU activation functions fit to data. We propose a new function space, which is reminiscent of classical bounded variation spaces, that captures the compositional structure associated with deep neural networks. We derive a representer theorem showing that deep ReLU networks are solutions to regularized data fitting problems in this function space. The function space consists of compositions of functions from the (non-reflexive) Banach spaces of second-order bounded variation in the Radon domain. These are Banach spaces with sparsity-promoting norms, giving insight into the role of sparsity in deep neural networks. The neural network solutions have skip connections and rank bounded weight matrices, providing new theoretical support for these common architectural choices. The variational problem we study can be recast as a finite-dimensional neural network training problem with regularization schemes related to the notions of weight decay and path-norm regularization. Finally, our analysis builds on techniques from variational spline theory, providing new connections between deep neural networks and splines.
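In schematic form, the regularized data-fitting problem the abstract refers to looks like the following; the notation is a generic gloss of the abstract, not the paper's exact statement.

```latex
% Generic regularized data-fitting problem of the kind the abstract
% describes; \mathcal{F} stands for the paper's proposed function space of
% compositions of second-order bounded-variation functions, and the norm
% is that space's sparsity-promoting norm.
\min_{f \in \mathcal{F}} \; \sum_{i=1}^{N} \ell\bigl(f(x_i),\, y_i\bigr)
    \;+\; \lambda\, \|f\|_{\mathcal{F}}, \qquad \lambda > 0
```

The representer theorem then states that such problems admit solutions realized exactly by deep ReLU networks with skip connections and rank-bounded weight matrices.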
What kinds of functions do deep neural networks learn? Now we know! https://t.co/MHuSPPMd23
— Rob Nowak (@rdnowak) May 10, 2021
Our paper presents a new representer theorem for deep ReLU networks and provides theoretical insights into weight decay, sparsity, skip connections, and low-rank weight matrices.
5. Learning Controllable Content Generators
Sam Earle, Maria Edwards, Ahmed Khalifa, Philip Bontrager, Julian Togelius
It has recently been shown that reinforcement learning can be used to train generators capable of producing high-quality game levels, with quality defined in terms of some user-specified heuristic. To ensure that these generators’ output is sufficiently diverse (that is, not amounting to the reproduction of a single optimal level configuration), the generation process is constrained such that the initial seed results in some variance in the generator’s output. However, this results in a loss of control over the generated content for the human user. We propose to train generators capable of producing controllably diverse output, by making them “goal-aware.” To this end, we add conditional inputs representing how close a generator is to some heuristic, and also modify the reward mechanism to incorporate that value. Testing on multiple domains, we show that the resulting level generators are capable of exploring the space of possible levels in a targeted, controllable manner, producing levels whose quality is comparable to that of their goal-unaware counterparts while remaining diverse along designer-specified dimensions.
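A hedged sketch of the reward-shaping idea: reward the agent for shrinking the gap between a level metric and a designer-specified target. The function and its arguments are illustrative, not the paper's exact formulation.

```python
def goal_aware_reward(metric_value, prev_metric_value, target, scale=1.0):
    """Reward an edit that moves a level's heuristic metric toward a
    user-specified target.

    Illustrative sketch of folding goal proximity into the reward;
    the shaping actually used in the paper may differ.
    """
    prev_gap = abs(target - prev_metric_value)
    new_gap = abs(target - metric_value)
    return scale * (prev_gap - new_gap)  # positive if the gap shrank

# Example: the generator moved a level's path length from 12 to 17
# while the designer asked for 20, so the step earns a positive reward.
print(goal_aware_reward(17, prev_metric_value=12, target=20))  # 5.0
```

Feeding the target (or the remaining gap) to the policy as a conditional input is what makes the trained generator steerable at test time.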
In "Learning Controllable Content Generators," with Maria Edwards, @Amidos2006, @FilipoGiovanni, and @togelius, we train an RL agent to generate game levels with specific features by feeding the agent targets as input and rewarding it for approaching them.https://t.co/VaKxX3TVMz pic.twitter.com/5YhmSvkn0l
— Sam Earle (@Smearle_RH) May 10, 2021
Learning Controllable Content Generators
— AK (@ak92501) May 10, 2021
pdf: https://t.co/SkkFy1SlRX
abs: https://t.co/qStOBsLM7R
resulting level generators explore the space of possible levels in a targeted, controllable manner, producing levels of quality comparable to their goal-unaware counterparts pic.twitter.com/8NJZi7b9o9
6. LASR: Learning Articulated Shape Reconstruction from a Monocular Video
Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Huiwen Chang, Deva Ramanan, William T. Freeman, Ce Liu
Remarkable progress has been made in 3D reconstruction of rigid structures from a video or a collection of images. However, it is still challenging to reconstruct nonrigid structures from RGB inputs, due to the under-constrained nature of the problem. While template-based approaches, such as parametric shape models, have achieved great success in modeling the “closed world” of known object categories, they cannot handle the “open world” of novel object categories or outlier shapes well. In this work, we introduce a template-free approach to learn 3D shapes from a single video. It adopts an analysis-by-synthesis strategy that forward-renders object silhouettes, optical flow, and pixel values to compare with video observations, which generates gradients to adjust the camera, shape, and motion parameters. Without using a category-specific shape template, our method faithfully reconstructs nonrigid 3D structures from videos of humans, animals, and objects of unknown classes. Code will be available at lasr-google.github.io .
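The analysis-by-synthesis loop can be sketched generically: render predictions from the current parameters, compare them with the observations, and backpropagate. The toy `render` below is a self-contained stand-in; the real method forward-renders silhouettes, optical flow, and pixel values through a differentiable renderer.

```python
import torch

# Toy analysis-by-synthesis loop in the spirit of LASR: forward-render
# predictions from shape/motion/camera parameters, compare them with the
# observed video signals, and backpropagate to update the parameters.
params = torch.randn(10, requires_grad=True)   # shape/camera/motion (toy)
observation = torch.randn(64)                  # observed frame signal (toy)
proj = torch.randn(64, 10)                     # fixed weights of the toy renderer

def render(p):
    return torch.tanh(proj @ p)                # differentiable stand-in

opt = torch.optim.Adam([params], lr=0.05)
for step in range(100):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(params), observation)
    loss.backward()                            # gradients flow to the parameters
    opt.step()
```

Everything specific to LASR lives in what `render` produces and what the loss compares; the optimization scaffolding is this generic.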
LASR: Learning Articulated Shape Reconstruction from a Monocular Video
— AK (@ak92501) May 10, 2021
pdf: https://t.co/0oB93bnY6j
abs: https://t.co/gi1lfVL1q4
project page: https://t.co/AK9BxWgMqa pic.twitter.com/gUforGguA6
7. Contrastive Learning for Unsupervised Image-to-Image Translation
Hanbit Lee, Jinseok Seol, Sang-goo Lee
Image-to-image translation aims to learn a mapping between different groups of visually distinguishable images. While recent methods have shown an impressive ability to change even the intricate appearance of images, they still rely on domain labels to train a model to distinguish between distinct visual features. Such dependency on labels often significantly limits the scope of applications, since consistent and high-quality labels are expensive. Instead, we wish to capture visual features from images themselves and apply them to enable realistic translation without human-generated labels. To this end, we propose an unsupervised image-to-image translation method based on contrastive learning. The key idea is to learn a discriminator that differentiates between distinctive styles and let the discriminator supervise a generator to transfer those styles across images. During training, we randomly sample a pair of images and train the generator to change the appearance of one towards the other while keeping the original structure. Experimental results show that our method outperforms the leading unsupervised baselines in terms of visual quality and translation accuracy.
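For intuition, here is a generic InfoNCE-style contrastive loss over style embeddings, the family of objectives this method builds on; the paper's actual discriminator-side formulation may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, temperature=0.07):
    """Generic InfoNCE contrastive loss over style embeddings.

    A sketch of the contrastive objective family the paper builds on;
    the paper's discriminator loss may be formulated differently.
    """
    anchor = F.normalize(anchor, dim=-1)                  # (d,)
    candidates = F.normalize(
        torch.cat([positive.unsqueeze(0), negatives], dim=0), dim=-1)
    logits = candidates @ anchor / temperature            # (1 + n_neg,)
    # The positive sits at index 0, so it is the classification target.
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

loss = info_nce(torch.randn(128), torch.randn(128), torch.randn(8, 128))
```

Pulling same-style embeddings together and pushing different-style embeddings apart is what lets the discriminator separate styles without domain labels.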
Contrastive Learning for Unsupervised Image-to-Image Translation
— AK (@ak92501) May 10, 2021
pdf: https://t.co/tiRPgcY0ic
abs: https://t.co/OoyrZ6ERqQ
learn a discriminator that differentiates between distinctive styles and let the discriminator supervise a generator to transfer those styles across images pic.twitter.com/PigLyp2795
Just uploaded a paper to arXiv. https://t.co/X964maiUrb
— 제이미 (@theeluwin) May 10, 2021
8. Structured dataset documentation: a datasheet for CheXpert
Christian Garbin, Pranav Rajpurkar, Jeremy Irvin, Matthew P. Lungren, Oge Marques
Billions of X-ray images are taken worldwide each year. Machine learning, and deep learning in particular, has shown potential to help radiologists triage and diagnose images. However, deep learning requires large datasets with reliable labels. The CheXpert dataset was created with the participation of board-certified radiologists, resulting in the strong ground truth needed to train deep learning networks. Following the structured format of Datasheets for Datasets, this paper expands on the original CheXpert paper and other sources to show the critical role played by radiologists in the creation of reliable labels and to describe the different aspects of the dataset composition in detail. Such structured documentation intends to increase the awareness in the machine learning and medical communities of the strengths, applications, and evolution of CheXpert, thereby advancing the field of medical image analysis. Another objective of this paper is to put forward this dataset datasheet as an example to the community of how to create detailed and structured descriptions of datasets. We believe that clearly documenting the creation process, the contents, and applications of datasets accelerates the creation of useful and reliable models.
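For readers unfamiliar with the format, a Datasheets for Datasets record is organized around a fixed set of question sections; the skeleton below uses the section names from Gebru et al.'s template, with illustrative placeholder prompts rather than quotes from the CheXpert datasheet.

```python
# Skeleton of a "Datasheets for Datasets"-style record, the structured
# format the CheXpert datasheet follows. Section names follow Gebru et
# al.'s template; the prompts are illustrative placeholders.
datasheet = {
    "motivation": "Why was the dataset created, and by whom?",
    "composition": "What do the instances represent? How many are there?",
    "collection_process": "How was the data acquired and sampled?",
    "preprocessing_labeling": "How were labels produced and validated?",
    "uses": "What tasks is the dataset suited (and unsuited) for?",
    "distribution": "How is the dataset shared, and under what license?",
    "maintenance": "Who maintains it, and how are updates handled?",
}
for section, prompt in datasheet.items():
    print(f"{section}: {prompt}")
```

Answering these questions explicitly, as the CheXpert datasheet does, is what makes a dataset's strengths and limitations auditable by downstream users.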
9. Hierarchical Graph Neural Networks
Stanislav Sobolevsky
- retweets: 42, favorites: 43 (05/11/2021 09:06:37)
- cs.LG | cs.AI | math.CO | physics.data-an
Over recent years, Graph Neural Networks have become increasingly popular in network analytics and beyond. With that, their architecture noticeably diverges from the classical multi-layered hierarchical organization of traditional neural networks. At the same time, many conventional approaches in network science efficiently utilize hierarchical approaches to account for the hierarchical organization of networks, and recent works emphasize their critical importance. This paper aims to connect the dots between traditional Neural Network and Graph Neural Network architectures, as well as network science approaches, by harnessing the power of hierarchical network organization. A Hierarchical Graph Neural Network architecture is proposed, supplementing the original input network layer with a hierarchy of auxiliary network layers and organizing the computational scheme to update node features through both horizontal network connections within each layer and vertical connections between layers. This enables simultaneous learning of individual node features along with aggregated network features at variable resolution, and uses them to improve the convergence and stability of individual node feature learning. The proposed Hierarchical Graph Neural Network architecture is successfully evaluated on network embedding and modeling, as well as network classification, node labeling, and community detection tasks, and demonstrates increased efficiency on them.
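A hedged sketch of the update scheme the abstract describes: node features receive horizontal messages over the within-level adjacency and vertical messages from a coarser auxiliary level. The propagation rule below is illustrative, not the paper's exact one.

```python
import torch
import torch.nn as nn

class HierarchicalGNNLayer(nn.Module):
    """One update step over a two-level network hierarchy.

    Illustrative sketch: horizontal messages use the node adjacency,
    vertical messages use a node-to-group assignment; the paper's
    exact propagation rule may differ.
    """

    def __init__(self, dim):
        super().__init__()
        self.w_horizontal = nn.Linear(dim, dim)
        self.w_vertical = nn.Linear(dim, dim)

    def forward(self, h_nodes, adj, assign, h_groups):
        # adj: (n, n) node adjacency; assign: (n, g) node-to-group weights
        horizontal = self.w_horizontal(adj @ h_nodes)    # within-level
        vertical = self.w_vertical(assign @ h_groups)    # from coarser level
        h_nodes = torch.relu(h_nodes + horizontal + vertical)
        h_groups = assign.t() @ h_nodes                  # aggregate upward
        return h_nodes, h_groups

layer = HierarchicalGNNLayer(dim=16)
h, g = layer(torch.randn(5, 16), torch.rand(5, 5),
             torch.rand(5, 2), torch.randn(2, 16))
```

The coarser levels act as a low-resolution summary that feeds back into every node update, which is where the claimed gains in convergence and stability come from.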
Over the recent years, Graph #NeuralNetworks have become increasingly popular
— The Year of the Graph (@TheYotg) May 10, 2021
Hierarchical Graph Neural Networks aim to connect the dots between traditional Neural Networks & GNN architectures as well as the network science approaches. #EmergingTech #AI https://t.co/S2quI2tpQi pic.twitter.com/eSvJ9XPkZU
10. Emergence in artificial life
Carlos Gershenson
- retweets: 44, favorites: 35 (05/11/2021 09:06:37)
- physics.gen-ph | cs.AI
Concepts similar to emergence have been used since antiquity, but we lack an agreed definition of emergence. Still, emergence has been identified as one of the features of complex systems. Most would agree with the statement “life is complex.” Thus, understanding emergence and complexity should benefit the study of living systems. It can be said that life emerges from the interactions of complex molecules. But how useful is this for understanding living systems? Artificial life (ALife) has been developed in recent decades to study life using a synthetic approach: build it to understand it. ALife systems are not so complex, be they soft (simulations), hard (robots), or wet (protocells). We can therefore aim first to understand emergence in ALife, and then use this knowledge in biology. I argue that to understand emergence and life, it is useful to use information as a framework. In a general sense, emergence can be defined as information that is not present at one scale but is present at another scale. This perspective avoids the problems of studying emergence from a materialistic framework, and can be useful for studying self-organization and complexity.
Emergence in artificial life.
— Carlos Gershenson (@cgershen) May 10, 2021
My latest preprint.
Emergence can be defined as information that is not present at one scale but is present at another scale.
I explore how this can help our understanding of (artificial) life. https://t.co/KQY08jkyJO
11. On-the-Fly Controlled Text Generation with Experts and Anti-Experts
Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi
Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation which combines a pretrained language model with experts and/or anti-experts in an ensemble of language models. Intuitively, under our ensemble, output tokens only get high probability if they are considered likely by the experts, and unlikely by the anti-experts. We apply DExperts to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Our work highlights the promise of using LMs trained on text with (un)desired attributes for efficient decoding-time controlled language generation.
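The decoding rule described in the abstract can be sketched as a simple logit-space ensemble: tokens gain probability when the expert assigns them high likelihood and the anti-expert does not. The weight `alpha` below is an illustrative hyperparameter.

```python
import torch

def dexperts_logits(base, expert, anti_expert, alpha=1.0):
    """Combine next-token logits at decoding time.

    Sketch of the DExperts-style ensemble the abstract describes:
    the expert's preferences are added and the anti-expert's are
    subtracted; `alpha` controls the strength of the steering.
    """
    return base + alpha * (expert - anti_expert)

vocab = 50_000
probs = torch.softmax(
    dexperts_logits(torch.randn(vocab), torch.randn(vocab),
                    torch.randn(vocab), alpha=2.0),
    dim=-1)  # then sample or argmax from `probs` as usual
```

Because the combination happens purely at decoding time, the base language model never needs to be fine-tuned.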
On-the-Fly Controlled Text Generation with Experts and Anti-Experts
— AK (@ak92501) May 10, 2021
pdf: https://t.co/etzYuAzYo5
abs: https://t.co/0MonKgtPRm
a method of ensembling language models at decoding time to control an attribute of generated text pic.twitter.com/nCjuHbVaCP
12. A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers
Pradeep Dasigi, Kyle Lo, Iz Beltagy, Arman Cohan, Noah A. Smith, Matt Gardner
Readers of academic research papers often read with the goal of answering specific questions. Question Answering systems that can answer those questions can make consumption of the content much more efficient. However, building such tools requires data that reflect the difficulty of the task arising from complex reasoning about claims made in multiple parts of a paper. In contrast, existing information-seeking question answering datasets usually contain questions about generic factoid-type information. We therefore present QASPER, a dataset of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners who also provide supporting evidence to answers. We find that existing models that do well on other QA tasks do not perform well on answering these questions, underperforming humans by at least 27 F1 points when answering them from entire papers, motivating further research in document-grounded, information-seeking QA, which our dataset is designed to facilitate.
A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers
— AK (@ak92501) May 10, 2021
pdf: https://t.co/RsTV9UeHPq
abs: https://t.co/6g2Xt9vmhW
project page: https://t.co/OYjXg8lCCW
an information-seeking QA dataset over NLP research papers pic.twitter.com/UUZ9MzNUdE
13. SpeechMoE: Scaling to Large Acoustic Models with Dynamic Routing Mixture of Experts
Zhao You, Shulin Feng, Dan Su, Dong Yu
Recently, Mixture of Experts (MoE) based Transformers have shown promising results in many domains. This is largely due to two advantages of the architecture: first, a MoE-based Transformer can increase model capacity without increasing computational cost at either training or inference time; second, it is a dynamic network that can adapt to the varying complexity of input instances in real-world applications. In this work, we explore a MoE-based model for speech recognition, named SpeechMoE. To further control the sparsity of router activations and improve the diversity of gate values, we propose a sparsity L1 loss and a mean importance loss, respectively. In addition, a new router architecture is used in SpeechMoE which can simultaneously utilize information from a shared embedding network and the hierarchical representations of different MoE layers. Experimental results show that SpeechMoE achieves a lower character error rate (CER) with computation cost comparable to traditional static networks, providing 7.0%-23.0% relative CER improvements on four evaluation datasets.
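A hedged sketch of the two auxiliary router losses named in the abstract: an L1 term pushing each token's routing distribution toward sparsity, and an importance term penalizing uneven expert usage. The exact formulations in the paper may differ.

```python
import torch

def router_aux_losses(router_probs):
    """Auxiliary router losses sketched from the abstract's description.

    router_probs: (tokens, experts) softmax outputs of the router.
    The L1 term is minimized by one-hot (sparse) rows; the importance
    term is a squared coefficient of variation over per-expert usage.
    Both are illustrative, not the paper's exact formulas.
    """
    # Sparsity: L1 norm of each L2-normalized row (equals 1.0 when one-hot)
    l1_loss = (router_probs /
               router_probs.norm(dim=-1, keepdim=True)).sum(-1).mean()
    # Importance: penalize dispersion of mean gate values across experts
    importance = router_probs.mean(0)                  # (experts,)
    importance_loss = (importance.std() / importance.mean()) ** 2
    return l1_loss, importance_loss

l1, imp = router_aux_losses(torch.softmax(torch.randn(32, 8), dim=-1))
```

The two terms pull in complementary directions: sparse per-token routing keeps inference cheap, while balanced aggregate usage keeps all experts trained.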
SpeechMoE: Scaling to Large Acoustic Models with Dynamic Routing Mixture of Experts
— Aran Komatsuzaki (@arankomatsuzaki) May 10, 2021
Achieves a lower character error rate (CER) w/ computation cost comparable to traditional networks, providing 7.0% ∼ 23.0% relative CER gains on four datasets. https://t.co/yHobg3c6xe pic.twitter.com/wrGVfVA4yA