Hot Papers 2021-05-25

1. True Few-Shot Learning with Language Models

Ethan Perez, Douwe Kiela, Kyunghyun Cho

Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates (“prompts”). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model’s true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.
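
As a rough illustration of the true few-shot setting (a sketch, not code from the paper), the snippet below selects a prompt by cross-validating over only the few labeled examples themselves; `label_log_prob` is a hypothetical callback standing in for an LM query that returns the log-probability of the gold label.

```python
# Sketch: cross-validation (CV) prompt selection using only the few labeled
# examples. `label_log_prob` is a hypothetical stand-in for an LM call that
# returns log p(label | prompt, train_examples, x); it is not from the paper.
import random
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (input text, gold label)

def cv_score(prompt: str,
             examples: List[Example],
             label_log_prob: Callable[[str, List[Example], str, str], float],
             n_folds: int = 5) -> float:
    """Average held-out log-likelihood of the gold label across CV folds."""
    examples = list(examples)
    random.Random(0).shuffle(examples)
    folds = [examples[i::n_folds] for i in range(n_folds)]
    total, count = 0.0, 0
    for i, held_out in enumerate(folds):
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        for x, y in held_out:
            total += label_log_prob(prompt, train, x, y)
            count += 1
    return total / max(count, 1)

def select_prompt(prompts, examples, label_log_prob):
    """Pick the prompt with the best CV score -- the kind of criterion the
    paper finds to be only marginally better than random selection."""
    return max(prompts, key=lambda p: cv_score(p, examples, label_log_prob))
```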

2. Self-Attention Networks Can Process Bounded Hierarchical Languages

Shunyu Yao, Binghui Peng, Christos Papadimitriou, Karthik Narasimhan

Despite their impressive performance in NLP, self-attention networks were recently proved to be limited for processing formal languages with hierarchical structure, such as $\mathsf{Dyck}_k$, the language consisting of well-nested parentheses of $k$ types. This suggested that natural language can be approximated well with models that are too weak for formal languages, or that the role of hierarchy and recursion in natural language might be limited. We qualify this implication by proving that self-attention networks can process $\mathsf{Dyck}_{k,D}$, the subset of $\mathsf{Dyck}_k$ with depth bounded by $D$, which arguably better captures the bounded hierarchical structure of natural language. Specifically, we construct a hard-attention network with $D+1$ layers and $O(\log k)$ memory size (per token per layer) that recognizes $\mathsf{Dyck}_{k,D}$, and a soft-attention network with two layers and $O(\log k)$ memory size that generates $\mathsf{Dyck}_{k,D}$. Experiments show that self-attention networks trained on $\mathsf{Dyck}_{k,D}$ generalize to longer inputs with near-perfect accuracy, and also verify the theoretical memory advantage of self-attention networks over recurrent networks.
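
For readers unfamiliar with the language, here is a plain stack-based recognizer for $\mathsf{Dyck}_{k,D}$ (a reference implementation of the language itself, not the self-attention construction from the paper):

```python
# Reference (non-neural) recognizer for Dyck_{k,D}: well-nested strings over
# k bracket types whose nesting depth never exceeds D.
def is_dyck_k_d(tokens, k, D):
    """tokens: sequence of (bracket_type, is_open) pairs, 0 <= bracket_type < k."""
    stack = []
    for bracket_type, is_open in tokens:
        if not 0 <= bracket_type < k:
            return False
        if is_open:
            stack.append(bracket_type)
            if len(stack) > D:                  # depth bound exceeded
                return False
        elif not stack or stack.pop() != bracket_type:
            return False                        # unmatched or wrong closing bracket
    return not stack                            # every opened bracket must be closed

# Example with k = 2: "( [ ] ) [ ]" is in Dyck_{2,2} but not Dyck_{2,1}.
s = [(0, True), (1, True), (1, False), (0, False), (1, True), (1, False)]
assert is_dyck_k_d(s, k=2, D=2)
assert not is_dyck_k_d(s, k=2, D=1)
```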

3. Embracing New Techniques in Deep Learning for Estimating Image Memorability

Coen D. Needell, Wilma A. Bainbridge

Prior work has suggested that the memorability of an image is consistent across people, and thus can be treated as an intrinsic property of an image. Using computer vision models, we can make specific predictions about what people will remember or forget. While older work has used now-outdated deep learning architectures to predict image memorability, innovations in the field have given us new techniques to apply to this problem. Here, we propose and evaluate five alternative deep learning models that exploit developments in the field from the last five years, chiefly the introduction of residual neural networks, which are intended to allow the model to use semantic information in the memorability estimation process. These new models were tested against the prior state of the art with a combined dataset built to optimize both within-category and across-category predictions. Our findings suggest that the key prior memorability network had overstated its generalizability and was overfit to its training set. Our new models outperform this prior model, leading us to conclude that residual networks outperform simpler convolutional neural networks in memorability regression. We make our new state-of-the-art model readily available to the research community, allowing memory researchers to make predictions about memorability on a wider range of images.
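
As a generic illustration of the modeling approach (not the authors' released model), a residual-network memorability regressor can be built by swapping a ResNet classification head for a single bounded regression output:

```python
# Generic sketch of a residual-network memorability regressor in PyTorch:
# a torchvision ResNet backbone with its classifier replaced by a single
# regression output in [0, 1]. Illustrative only; not the paper's model.
import torch
import torch.nn as nn
from torchvision import models

class MemorabilityRegressor(nn.Module):
    def __init__(self, pretrained: bool = True):
        super().__init__()
        backbone = models.resnet50(pretrained=pretrained)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Sequential(          # replace the 1000-way classifier
            nn.Linear(in_features, 1),
            nn.Sigmoid(),                     # memorability scores lie in [0, 1]
        )
        self.backbone = backbone

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images).squeeze(-1)

# Training would minimize a regression loss against human memorability scores:
model = MemorabilityRegressor()
images = torch.randn(4, 3, 224, 224)          # dummy batch
scores = model(images)                        # shape (4,), values in (0, 1)
loss = nn.functional.mse_loss(scores, torch.rand(4))
```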

4. Homotopies in Multiway (Non-Deterministic) Rewriting Systems as $n$-Fold Categories

Xerxes D. Arsiwalla, Jonathan Gorard, Hatem Elshatlawy

We investigate the algebraic and compositional properties of multiway (non-deterministic) abstract rewriting systems, which are the archetypical structures underlying the formalism of the so-called Wolfram model. We demonstrate the existence of higher homotopies in this class of rewriting systems, where these homotopic maps are induced by the inclusion of appropriate rewriting rules taken from an abstract rulial space of all possible such rules. Furthermore, we show that a multiway rewriting system with homotopies up to order $n$ may naturally be formalized as an $n$-fold category, such that (upon inclusion of appropriate inverse morphisms via invertible rewriting relations) the infinite limit of this structure yields an $\infty$-groupoid. Via Grothendieck’s homotopy hypothesis, this $\infty$-groupoid thus inherits the structure of a formal homotopy space. We conclude with some comments on how this computational framework of multiway rewriting systems may potentially be used for making formal connections to homotopy spaces upon which models of physics can be instantiated.
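
As a concrete (if drastically simplified) picture of the underlying structure, the sketch below builds the multiway evolution graph of a non-deterministic string rewriting system by applying every rule at every matching position; the categorical and homotopy-theoretic content of the paper is not modeled here.

```python
# Minimal sketch of a multiway (non-deterministic) string rewriting system:
# from each state we apply every rule at every matching position, collecting
# all successor states, and accumulate the resulting multiway evolution graph.
def one_step(state, rules):
    """All states reachable from `state` by a single rewrite."""
    successors = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            successors.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return successors

def multiway_graph(initial, rules, steps):
    """Edges of the multiway evolution graph up to `steps` generations."""
    frontier, edges = {initial}, set()
    for _ in range(steps):
        next_frontier = set()
        for s in frontier:
            for t in one_step(s, rules):
                edges.add((s, t))
                next_frontier.add(t)
        frontier = next_frontier
    return edges

# Example: the rule A -> AB applied non-deterministically to "AA" branches.
print(sorted(multiway_graph("AA", [("A", "AB")], steps=2)))
```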