1. Convolution-Free Medical Image Segmentation using Transformers
Davood Karimi, Serge Vasylechko, Ali Gholipour
Like other applications in computer vision, medical image segmentation has been most successfully addressed using deep learning models that rely on the convolution operation as their main building block. Convolutions enjoy important properties such as sparse interactions, weight sharing, and translation equivariance. These properties give convolutional neural networks (CNNs) a strong and useful inductive bias for vision tasks. In this work we show that a different method, based entirely on self-attention between neighboring image patches and without any convolution operations, can achieve competitive or better results. Given a 3D image block, our network divides it into 3D patches and computes a 1D embedding for each patch. The network predicts the segmentation map for the center patch of the block based on the self-attention between these patch embeddings. We show that the proposed model can achieve segmentation accuracies that are better than state-of-the-art CNNs on three datasets. We also propose methods for pre-training this model on large corpora of unlabeled images. Our experiments show that, with pre-training, the advantage of our proposed network over CNNs can be significant when the amount of labeled training data is small.
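To make the patch-embedding-plus-self-attention recipe concrete, here is a minimal PyTorch sketch of the idea described in the abstract, not the authors' implementation: the `PatchSelfAttentionSegmenter` class, the patch size, grid size, embedding width, and all other hyperparameters are illustrative assumptions.

```python
# Sketch: split a 3D block into a grid of 3D patches, embed each patch as a
# 1D vector, run self-attention over the patch embeddings, and predict the
# voxel-wise segmentation of the center patch. No convolutions anywhere.
import torch
import torch.nn as nn


class PatchSelfAttentionSegmenter(nn.Module):
    def __init__(self, patch=8, grid=3, channels=1, classes=2, dim=256, heads=8, layers=6):
        super().__init__()
        self.grid3 = grid ** 3                       # number of patches per block
        patch_voxels = channels * patch ** 3         # flattened size of one patch
        self.embed = nn.Linear(patch_voxels, dim)    # 1D embedding per patch
        self.pos = nn.Parameter(torch.zeros(1, self.grid3, dim))  # learned positions
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               dim_feedforward=4 * dim,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        # class scores for every voxel of the *center* patch only
        self.head = nn.Linear(dim, classes * patch ** 3)
        self.patch, self.classes = patch, classes

    def forward(self, patches):
        # patches: (batch, grid**3, channels * patch**3), already flattened
        tokens = self.embed(patches) + self.pos
        tokens = self.encoder(tokens)                # self-attention between patches
        center = tokens[:, self.grid3 // 2]          # embedding of the center patch
        logits = self.head(center)
        return logits.view(-1, self.classes, self.patch, self.patch, self.patch)


# toy usage: 2 blocks, each a 3x3x3 grid of 8^3 single-channel patches
model = PatchSelfAttentionSegmenter()
x = torch.randn(2, 27, 8 ** 3)
print(model(x).shape)  # torch.Size([2, 2, 8, 8, 8])
```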
Convolution-Free Medical Image Segmentation using Transformers
pdf: https://t.co/pqXHoVMlUD
abs: https://t.co/jS3S3YySMB pic.twitter.com/cSAGzdRBVI
— AK (@ak92501) March 1, 2021
2. Named Tensor Notation
David Chiang, Alexander M. Rush, Boaz Barak
We propose a notation for tensors with named axes, which relieves the author, reader, and future implementers from the burden of keeping track of the order of axes and the purpose of each. It also makes it easy to extend operations on low-order tensors to higher order ones (e.g., to extend an operation on images to minibatches of images, or extend the attention mechanism to multiple attention heads). After a brief overview of our notation, we illustrate it through several examples from modern machine learning, from building blocks like attention and convolution to full models like Transformers and LeNet. Finally, we give formal definitions and describe some extensions. Our proposals build on ideas from many previous papers and software libraries. We hope that this document will encourage more authors to use named tensors, resulting in clearer papers and less bug-prone implementations. The source code for this document can be found at https://github.com/namedtensor/notation/. We invite anyone to make comments on this proposal by submitting issues or pull requests on this repository.
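As a rough illustration of why naming axes helps (a toy in code, not the paper's formal notation), the sketch below carries a tuple of axis names alongside each array and lets operations address axes by name rather than by position; `NamedTensor`, `contract`, and `nsoftmax` are hypothetical helpers written only for this example.

```python
# Toy named tensors: the attention computation at the bottom never mentions
# an axis position, only axis names, so the reader does not have to keep
# track of which dimension is which.
from dataclasses import dataclass
import numpy as np


@dataclass
class NamedTensor:
    data: np.ndarray
    names: tuple  # one name per axis, e.g. ("seq", "key")

    def axis(self, name):
        return self.names.index(name)


def contract(a: NamedTensor, b: NamedTensor, name: str) -> NamedTensor:
    """Sum-product over the shared axis called `name` (a named dot product)."""
    out = np.tensordot(a.data, b.data, axes=(a.axis(name), b.axis(name)))
    out_names = tuple(n for n in a.names if n != name) + \
                tuple(n for n in b.names if n != name)
    return NamedTensor(out, out_names)


def nsoftmax(t: NamedTensor, name: str) -> NamedTensor:
    """Softmax along the axis called `name`, wherever it happens to sit."""
    ax = t.axis(name)
    e = np.exp(t.data - t.data.max(axis=ax, keepdims=True))
    return NamedTensor(e / e.sum(axis=ax, keepdims=True), t.names)


# Single-head attention written purely against axis names.
q = NamedTensor(np.random.randn(5, 16), ("seq_q", "key"))
k = NamedTensor(np.random.randn(7, 16), ("seq_k", "key"))
v = NamedTensor(np.random.randn(7, 16), ("seq_k", "val"))

scores = contract(q, k, "key")        # names: ("seq_q", "seq_k")
weights = nsoftmax(scores, "seq_k")
out = contract(weights, v, "seq_k")   # names: ("seq_q", "val")
print(out.names, out.data.shape)      # ('seq_q', 'val') (5, 16)
```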
Named Tensor Notation (v1.0 release w/ @davidweichiang, @boazbaraktcs) - a "dangerous and irresponsible" proposal for reproducible math in deep learning.
PDF: https://t.co/1jvhpG7yCH
Comments: https://t.co/76h9E1bUWT
Why not Einsum? https://t.co/St0anL4v74 pic.twitter.com/gbdGUBSS0C
— Sasha Rush (@srush_nlp) March 1, 2021
3. Iterative SE(3)-Transformers
Fabian B. Fuchs, Edward Wagstaff, Justas Dauparas, Ingmar Posner
When manipulating three-dimensional data, it is possible to ensure that rotational and translational symmetries are respected by applying so-called SE(3)-equivariant models. Protein structure prediction is a prominent example of a task which displays these symmetries. Recent work in this area has successfully made use of an SE(3)-equivariant model, applying an iterative SE(3)-equivariant attention mechanism. Motivated by this application, we implement an iterative version of the SE(3)-Transformer, an SE(3)-equivariant attention-based model for graph data. We address the additional complications which arise when applying the SE(3)-Transformer in an iterative fashion, compare the iterative and single-pass versions on a toy problem, and consider why an iterative model may be beneficial in some problem settings. We make the code for our implementation available to the community.
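The sketch below illustrates only the iterative structure, with a deliberately simplified rotation- and translation-equivariant coordinate update (an EGNN-style step) standing in for the SE(3)-Transformer layer; it is not the paper's model, but it shows how pairwise geometry must be recomputed from the updated coordinates at every iteration and why equivariance is preserved across the loop.

```python
# Iterative refinement with a toy equivariant update (a stand-in, not the
# SE(3)-Transformer): each step moves nodes along relative position vectors,
# scaled by functions of invariant quantities (distances, feature products).
import numpy as np


def equivariant_step(coords, feats, weight=0.1):
    """One simplified equivariant update of the node coordinates."""
    diff = coords[:, None, :] - coords[None, :, :]       # (N, N, 3), rotates with input
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)  # (N, N, 1), invariant
    gate = np.tanh(feats @ feats.T)[..., None]           # (N, N, 1), invariant
    msg = weight * gate * diff / (1.0 + dist)            # per-pair displacement
    return coords + msg.sum(axis=1), feats               # features left unchanged here


def iterative_refinement(coords, feats, num_iters=5):
    """Apply the equivariant step repeatedly, recomputing geometry each time."""
    for _ in range(num_iters):
        coords, feats = equivariant_step(coords, feats)
    return coords


rng = np.random.default_rng(0)
coords = rng.normal(size=(8, 3))   # 8 nodes in 3D
feats = rng.normal(size=(8, 4))    # per-node invariant features

out = iterative_refinement(coords, feats)

# Equivariance check: rotating the input rotates the output the same way.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
out_rot = iterative_refinement(coords @ R.T, feats)
print(np.allclose(out_rot, out @ R.T))  # True
```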
Iterative SE(3)-Transformers
pdf: https://t.co/slOxl8Yi2I
abs: https://t.co/rSblKEEvFZ pic.twitter.com/xLMitdhSTr
— AK (@ak92501) March 1, 2021
4. Swift for TensorFlow: A portable, flexible platform for deep learning
Brennan Saeta, Denys Shabalin, Marc Rasi, Brad Larson, Xihui Wu, Parker Schuh, Michelle Casbon, Daniel Zheng, Saleem Abdulrasool, Aleksandr Efremov, Dave Abrahams, Chris Lattner, Richard Wei
Swift for TensorFlow is a deep learning platform that scales from mobile devices to clusters of hardware accelerators in data centers. It combines a language-integrated automatic differentiation system and multiple Tensor implementations within a modern ahead-of-time compiled language oriented around mutable value semantics. The resulting platform has been validated through use in over 30 deep learning models and has been employed across data center and mobile applications.
As promised ~2 weeks ago, some academic papers about #S4TF are now available! First up is “the overview paper" (https://t.co/IsqFJAhuBZ); highlights include: (1) a discussion on how mutable value semantics is incredibly powerful (especially for autodiff & hw acclrs), and …
— Brennan Saeta (@bsaeta) March 1, 2021
5. RbSyn: Type- and Effect-Guided Program Synthesis
Sankha Narayan Guria, Jeffrey S. Foster, David Van Horn
In recent years, researchers have explored component-based synthesis, which aims to automatically construct programs that operate by composing calls to existing APIs. However, prior work has not considered efficient synthesis of methods with side effects, e.g., web app methods that update a database. In this paper, we introduce RbSyn, a novel type- and effect-guided synthesis tool for Ruby. An RbSyn synthesis goal is specified as the type for the target method and a series of test cases it must pass. RbSyn works by recursively generating well-typed candidate method bodies whose write effects match the read effects of the test case assertions. After finding a set of candidates that separately satisfy each test, RbSyn synthesizes a solution that branches to execute the correct candidate code under the appropriate conditions. We formalize RbSyn on a core, object-oriented language and describe how the key ideas of the model are scaled up in our implementation for Ruby. We evaluated RbSyn on 19 benchmarks, 12 of which come from popular, open-source Ruby apps. We found that RbSyn synthesizes correct solutions for all benchmarks, with 15 benchmarks synthesizing in under 9 seconds, while the slowest benchmark takes 83 seconds. Using observed reads to guide synthesis is effective: using type guidance alone times out on 10 of 12 app benchmarks. We also found that using less precise effect annotations leads to worse synthesis performance. In summary, we believe type- and effect-guided synthesis is an important step forward in synthesis of effectful methods from test cases.
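As a rough, language-agnostic sketch of the overall recipe (enumerate candidate compositions of known components, find candidates that satisfy subsets of the tests, then merge them behind a guard so each runs on the inputs it handles), here is a toy synthesizer in Python; it omits the type- and effect-based pruning entirely, and none of the names below come from RbSyn.

```python
# Toy component-based synthesis from test cases, illustrating only the
# "find partial candidates, then branch-merge them" structure.
from itertools import product

COMPONENTS = {            # toy "API": name -> unary function
    "abs": abs,
    "neg": lambda x: -x,
    "double": lambda x: 2 * x,
    "ident": lambda x: x,
}
GUARDS = {"nonneg": lambda x: x >= 0, "negative": lambda x: x < 0}

def candidates(max_depth=2):
    """Enumerate straight-line candidates as sequences of component names."""
    for depth in range(1, max_depth + 1):
        yield from product(COMPONENTS, repeat=depth)

def run(cand, x):
    for name in cand:
        x = COMPONENTS[name](x)
    return x

def passes(cand, tests):
    return {i for i, (x, want) in enumerate(tests) if run(cand, x) == want}

def synthesize(tests):
    pool = [(c, passes(c, tests)) for c in candidates()]
    every = set(range(len(tests)))
    for cand, ok in pool:                 # does one candidate pass all tests?
        if ok == every:
            return cand
    for (a, ok_a), (b, ok_b) in product(pool, repeat=2):
        if ok_a | ok_b == every:          # together they cover all tests
            for gname, guard in GUARDS.items():   # find a separating guard
                covers_a = all(guard(x) for i, (x, _) in enumerate(tests) if i in ok_a)
                covers_b = all(not guard(x) for i, (x, _) in enumerate(tests) if i not in ok_a)
                if covers_a and covers_b:
                    return ("if", gname, "then", a, "else", b)
    return None

# Target behavior: 2*x on non-negative inputs, x otherwise.
tests = [(3, 6), (-4, -4)]
print(synthesize(tests))  # ('if', 'nonneg', 'then', ('double',), 'else', ('ident',))
```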
I am super excited to announce that my paper "RbSyn: Type- and Effect-Guided Program Synthesis" with Jeff Foster and @lambda_calculus was conditionally accepted to @PLDI 2021.
Early preprint: https://t.co/ERyRvA0SVj
— Sankha Narayan Guria (@ngsankha) March 1, 2021
6. Evolution of collective fairness in complex networks through degree-based role assignment
Andreia Sofia Teixeira, Francisco C. Santos, Alexandre P. Francisco, Fernando P. Santos
- retweets: 30, favorites: 28 (03/02/2021 09:04:22)
- links: abs | pdf
- physics.soc-ph | cs.GT
From social contracts to climate agreements, individuals engage in groups that must collectively reach decisions with varying levels of equality and fairness. These dilemmas also pervade Distributed Artificial Intelligence, in domains such as automated negotiation, conflict resolution or resource allocation. As evidenced by the well-known Ultimatum Game — where a Proposer has to divide a resource with a Responder — payoff-maximizing outcomes are frequently at odds with fairness. Eliciting equality in populations of self-regarding agents requires judicious interventions. Here we use knowledge about agents’ social networks to implement fairness mechanisms, in the context of Multiplayer Ultimatum Games. We focus on network-based role assignment and show that preferentially attributing the role of Proposer to low-connected nodes increases the fairness levels in a population. We evaluate the effectiveness of low-degree Proposer assignment considering networks with different average connectivity, group sizes, and group voting rules when accepting proposals (e.g., majority or unanimity). We further show that low-degree Proposer assignment is efficient, not only optimizing fairness, but also the average payoff level in the population. Finally, we show that stricter voting rules (i.e., imposing an accepting consensus as a requirement for collectives to accept a proposal) attenuate the unfairness that results from situations where high-degree nodes (hubs) are the natural candidates to play as Proposers. Our results suggest new routes to use role assignment and voting mechanisms to prevent unfair behaviors from spreading on complex networks.
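A single-round illustration of the mechanism (no evolutionary dynamics, and with arbitrary strategies and graph model rather than the paper's setup) can be sketched as follows: groups are a node plus its neighbors, the Proposer role goes to the lowest- or highest-degree group member, and a proposal passes only if the fraction of accepting Responders meets the voting threshold M.

```python
# Toy single-round Multiplayer Ultimatum Games on a scale-free network,
# comparing low-degree versus high-degree Proposer assignment.
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(n=100, m=2, seed=1)   # heterogeneous degrees (hubs)

# each agent i has a strategy (p_i, q_i): offer p_i as Proposer,
# accept as Responder only if its share is at least q_i
p = rng.uniform(0, 1, G.number_of_nodes())
q = rng.uniform(0, 1, G.number_of_nodes())

def play_group(members, proposer, M=0.5):
    """One group game; returns the payoff of every group member."""
    responders = [i for i in members if i != proposer]
    share = p[proposer] / len(responders)            # equal split of the offer
    accepted = sum(share >= q[j] for j in responders)
    payoffs = dict.fromkeys(members, 0.0)
    if accepted / len(responders) >= M:              # group voting rule
        payoffs[proposer] = 1.0 - p[proposer]
        for j in responders:
            payoffs[j] = share
    return payoffs

def total_payoffs(low_degree_proposers=True, M=0.5):
    total = dict.fromkeys(G.nodes, 0.0)
    pick = min if low_degree_proposers else max      # degree-based role assignment
    for node in G.nodes:
        members = [node] + list(G.neighbors(node))
        proposer = pick(members, key=G.degree)
        for i, pay in play_group(members, proposer, M).items():
            total[i] += pay
    return np.array(list(total.values()))

for flag in (True, False):
    pays = total_payoffs(low_degree_proposers=flag)
    label = "low-degree proposers " if flag else "high-degree proposers"
    # payoff dispersion is used here only as a crude inequality proxy
    print(f"{label}: mean payoff {pays.mean():.3f}, std {pays.std():.3f}")
```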