1. True Few-Shot Learning with Language Models
Ethan Perez, Douwe Kiela, Kyunghyun Cho
Pretrained language models (LMs) perform well on many tasks even when learning from a few examples, but prior work uses many held-out examples to tune various aspects of learning, such as hyperparameters, training objectives, and natural language templates (“prompts”). Here, we evaluate the few-shot ability of LMs when such held-out examples are unavailable, a setting we call true few-shot learning. We test two model selection criteria, cross-validation and minimum description length, for choosing LM prompts and hyperparameters in the true few-shot setting. On average, both marginally outperform random selection and greatly underperform selection based on held-out examples. Moreover, selection criteria often prefer models that perform significantly worse than randomly-selected ones. We find similar results even when taking into account our uncertainty in a model’s true performance during selection, as well as when varying the amount of computation and number of examples used for selection. Overall, our findings suggest that prior work significantly overestimated the true few-shot ability of LMs given the difficulty of few-shot model selection.
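To make the setup concrete, here is a minimal sketch of prompt selection by leave-one-out cross-validation in the true few-shot setting, where the only signal available is the handful of labeled examples themselves. The `format_example` helper and the `lm_log_prob(prompt, target)` scorer are hypothetical stand-ins for whatever LM API is used; this is not the paper's code.

```python
# Illustrative sketch of true few-shot prompt selection via leave-one-out CV.
# `lm_log_prob(prompt, target)` is a hypothetical callable returning the LM's
# log-probability of `target` given `prompt`; swap in your own LM scorer.

def format_example(template, x, y=""):
    """Render one (input, label) pair with a natural-language template."""
    return template.format(x=x, y=y)

def loo_cv_score(template, examples, lm_log_prob):
    """Condition on all but one example, score the held-out label, average over folds."""
    total = 0.0
    for i, (x, y) in enumerate(examples):
        context = "\n".join(
            format_example(template, xj, yj)
            for j, (xj, yj) in enumerate(examples) if j != i
        )
        query = context + "\n" + format_example(template, x)  # label left blank
        total += lm_log_prob(query, y)
    return total / len(examples)

def select_prompt(templates, examples, lm_log_prob):
    """Pick the template with the highest average held-out log-likelihood."""
    return max(templates, key=lambda t: loo_cv_score(t, examples, lm_log_prob))

# e.g. select_prompt(["Input: {x}\nLabel: {y}", "{x} => {y}"], few_examples, lm_log_prob)
```

With only a handful of examples these scores are noisy, which is consistent with the paper's finding that such criteria only marginally beat picking a prompt at random.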
Language models are amazing few-shot learners with the right prompt, but how do we choose the right prompt? It turns out that people use large held-out sets(!). How do models like GPT3 do in a true few-shot setting?
— Ethan Perez (@EthanJPerez) May 25, 2021
Much worse: https://t.co/3YsG0U98Cl
w/ @douwekiela @kchonyc
1/N pic.twitter.com/kLPJ5WVXaO
2. Self-Attention Networks Can Process Bounded Hierarchical Languages
Shunyu Yao, Binghui Peng, Christos Papadimitriou, Karthik Narasimhan
Despite their impressive performance in NLP, self-attention networks were recently proved to be limited for processing formal languages with hierarchical structure, such as Dyck_k, the language consisting of well-nested parentheses of k types. This suggested that natural language can be approximated well with models that are too weak for formal languages, or that the role of hierarchy and recursion in natural language might be limited. We qualify this implication by proving that self-attention networks can process Dyck_{k,D}, the subset of Dyck_k with depth bounded by D, which arguably better captures the bounded hierarchical structure of natural language. Specifically, we construct a hard-attention network with D+1 layers and O(log k) memory size (per token per layer) that recognizes Dyck_{k,D}, and a soft-attention network with two layers and O(log k) memory size that generates Dyck_{k,D}. Experiments show that self-attention networks trained on Dyck_{k,D} generalize to longer inputs with near-perfect accuracy, and also verify the theoretical memory advantage of self-attention networks over recurrent networks.
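For readers unfamiliar with the formal language, here is a quick, paper-independent sketch of what membership in Dyck_{k,D} means: well-nested brackets of k types whose nesting depth never exceeds D. The paper's constructions are self-attention networks, not stack machines; this is only to pin down the language.

```python
# Illustrative sketch (not from the paper): a stack-based recognizer for
# Dyck_{k,D}. Brackets of type i are written as "(i" and ")i" here.

def is_dyck_kD(tokens, k, D):
    """True iff `tokens` is well-nested over at most k bracket types
    with maximum nesting depth at most D."""
    stack = []
    for tok in tokens:
        kind, idx = tok[0], int(tok[1:])
        if not (0 <= idx < k):
            return False
        if kind == "(":
            stack.append(idx)
            if len(stack) > D:          # depth bound violated
                return False
        elif kind == ")":
            if not stack or stack.pop() != idx:
                return False            # mismatched or unopened bracket
        else:
            return False
    return not stack                    # every bracket must be closed

# "(0 (1 )1 )0" is in Dyck_{2,2} but not Dyck_{2,1}:
print(is_dyck_kD(["(0", "(1", ")1", ")0"], k=2, D=2))  # True
print(is_dyck_kD(["(0", "(1", ")1", ")0"], k=2, D=1))  # False
```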
Self-Attention Networks Can Process Bounded Hierarchical Languages
— AK (@ak92501) May 25, 2021
pdf: https://t.co/aFQoYe2424
abs: https://t.co/Jp5pBICgAg pic.twitter.com/iabm6IWVdH
Hierarchical structure is a core aspect of language syntax. Recurrent networks can systematically process recursion by emulating stacks, but can self-attention networks? If so, how?
— Shunyu Yao (@ShunyuYao12) May 25, 2021
Our #ACL2021 paper sheds light on this fundamental issue! https://t.co/AX1e15vl0s
(1/5) pic.twitter.com/MVMT3kMdSp
3. Embracing New Techniques in Deep Learning for Estimating Image Memorability
Coen D. Needell, Wilma A. Bainbridge
Various work has suggested that the memorability of an image is consistent across people, and thus can be treated as an intrinsic property of an image. Using computer vision models, we can make specific predictions about what people will remember or forget. While older work has used now-outdated deep learning architectures to predict image memorability, innovations in the field have given us new techniques to apply to this problem. Here, we propose and evaluate five alternative deep learning models which exploit developments in the field from the last five years, largely the introduction of residual neural networks, which are intended to allow the model to use semantic information in the memorability estimation process. These new models were tested against the prior state of the art with a combined dataset built to optimize both within-category and across-category predictions. Our findings suggest that the key prior memorability network had overstated its generalizability and was overfit on its training set. Our new models outperform this prior model, leading us to conclude that Residual Networks outperform simpler convolutional neural networks in memorability regression. We make our new state-of-the-art model readily available to the research community, allowing memory researchers to make predictions about memorability on a wider range of images.
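As a rough sketch of what memorability regression with a residual network looks like in practice (assuming PyTorch and torchvision; this is not the authors' released model, just an illustration of the general recipe of swapping the classification head for a scalar output):

```python
# Minimal sketch: pretrained ResNet backbone + single-output regression head,
# trained with MSE against ground-truth memorability scores. Illustrative only.
import torch
import torch.nn as nn
from torchvision import models

class MemorabilityRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # ImageNet-pretrained backbone supplies semantic features
        # (torchvision >= 0.13; older versions use pretrained=True).
        self.backbone = models.resnet50(weights="IMAGENET1K_V1")
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, images):
        # images: (batch, 3, 224, 224) -> memorability scores in roughly [0, 1]
        return torch.sigmoid(self.backbone(images)).squeeze(-1)

model = MemorabilityRegressor()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch:
images = torch.randn(8, 3, 224, 224)
scores = torch.rand(8)                  # ground-truth memorability scores
loss = criterion(model(images), scores)
loss.backward()
optimizer.step()
```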
New preprint w/ @CoenNeedell: improving DNN predictions of image memorability! It uses conceptual info in residual networks to reach 68% prediction accuracy & visualize the features. The model's easy to use--just go here! https://t.co/QuI4HuHPz3 https://t.co/0Jbvgo1Q5y
— Wilma Bainbridge (@WilmaBainbridge) May 25, 2021
4. Homotopies in Multiway (Non-Deterministic) Rewriting Systems as n-Fold Categories
Xerxes D. Arsiwalla, Jonathan Gorard, Hatem Elshatlawy
We investigate the algebraic and compositional properties of multiway (non-deterministic) abstract rewriting systems, which are the archetypical structures underlying the formalism of the so-called Wolfram model. We demonstrate the existence of higher homotopies in this class of rewriting systems, where these homotopic maps are induced by the inclusion of appropriate rewriting rules taken from an abstract rulial space of all possible such rules. Furthermore, we show that a multiway rewriting system with homotopies up to order n may naturally be formalized as an n-fold category, such that (upon inclusion of appropriate inverse morphisms via invertible rewriting relations) the infinite limit of this structure yields an ∞-groupoid. Via Grothendieck’s homotopy hypothesis, this ∞-groupoid thus inherits the structure of a formal homotopy space. We conclude with some comments on how this computational framework of multiway rewriting systems may potentially be used for making formal connections to homotopy spaces upon which models of physics can be instantiated.
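For intuition, a multiway (non-deterministic) rewriting system applies every matching rule at every position, so each state can branch into many successors. Below is a toy sketch of that base structure for simple string rewriting (illustrative only; the paper's higher homotopies and n-fold categories live on top of this branching evolution):

```python
# Illustrative sketch (not from the paper): one generation at a time of a
# multiway string rewriting system, recording every rewrite event as an edge.

def one_step(state, rules):
    """All states reachable from `state` by applying one rule at one position."""
    successors = set()
    for lhs, rhs in rules:
        start = state.find(lhs)
        while start != -1:
            successors.add(state[:start] + rhs + state[start + len(lhs):])
            start = state.find(lhs, start + 1)
    return successors

def multiway_evolution(initial, rules, steps):
    """Breadth-first multiway evolution: edges of the branching state graph."""
    frontier, edges = {initial}, []
    for _ in range(steps):
        next_frontier = set()
        for s in frontier:
            for t in one_step(s, rules):
                edges.append((s, t))
                next_frontier.add(t)
        frontier = next_frontier
    return edges

# Example: rules "A" -> "AB" and "B" -> "A" applied to "A" for two steps.
print(multiway_evolution("A", [("A", "AB"), ("B", "A")], steps=2))
```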
ICYMI, our two submissions to ACT 2021 are now on arXiv. 1st presents our work on multiway string diagram rewriting, with applications to quantum information: https://t.co/JD3IPSN8s4
— Jonathan Gorard (@getjonwithit) May 25, 2021
2nd is an exploration of multiway systems as models for cohesive HoTT: https://t.co/JpvlZ7FTZE pic.twitter.com/gLAcK10Kz5