1. Inferring a Continuous Distribution of Atom Coordinates from Cryo-EM Images using VAEs
Dan Rosenbaum, Marta Garnelo, Michal Zielinski, Charlie Beattie, Ellen Clancy, Andrea Huber, Pushmeet Kohli, Andrew W. Senior, John Jumper, Carl Doersch, S. M. Ali Eslami, Olaf Ronneberger, Jonas Adler
Cryo-electron microscopy (cryo-EM) has revolutionized experimental protein structure determination. Despite advances in high resolution reconstruction, a majority of cryo-EM experiments provide either a single state of the studied macromolecule, or a relatively small number of its conformations. This reduces the effectiveness of the technique for proteins with flexible regions, which are known to play a key role in protein function. Recent methods for capturing conformational heterogeneity in cryo-EM data model it in volume space, making recovery of continuous atomic structures challenging. Here we present a fully deep-learning-based approach using variational auto-encoders (VAEs) to recover a continuous distribution of atomic protein structures and poses directly from picked particle images and demonstrate its efficacy on realistic simulated data. We hope that methods built on this work will allow incorporation of stronger prior information about protein structure and enable better understanding of non-rigid protein structures.
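As a rough illustration of the pipeline, here is a minimal PyTorch sketch: an encoder amortizes inference of a conformation latent and a pose from each particle image, a decoder maps the latent to 3D atom coordinates, and a toy Gaussian-splat projector stands in for the real cryo-EM image-formation model (CTF, noise, full rotations). The architecture, sizes, and renderer are all assumptions for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

class CoordVAE(nn.Module):
    def __init__(self, img_size=64, n_atoms=128, z_dim=16):
        super().__init__()
        self.z_dim, self.n_atoms, self.img_size = z_dim, n_atoms, img_size
        self.encoder = nn.Sequential(                 # image -> latent + pose
            nn.Flatten(), nn.Linear(img_size * img_size, 256), nn.ReLU(),
            nn.Linear(256, 2 * z_dim + 1),            # mean, log-var, 1 pose angle
        )
        self.decoder = nn.Sequential(                 # latent -> 3D atom coordinates
            nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, n_atoms * 3),
        )

    def render(self, coords, angle):
        # Toy stand-in for the cryo-EM forward model: rotate about the y-axis,
        # project orthographically, splat each atom as an isotropic Gaussian.
        cos, sin = torch.cos(angle), torch.sin(angle)
        x = coords[..., 0] * cos - coords[..., 2] * sin
        y = coords[..., 1]
        grid = torch.linspace(-1, 1, self.img_size, device=coords.device)
        gx, gy = torch.meshgrid(grid, grid, indexing="xy")
        d2 = (gx - x[..., None, None]) ** 2 + (gy - y[..., None, None]) ** 2
        return torch.exp(-d2 / 0.01).sum(dim=1)       # (B, H, W) projected density

    def forward(self, img):                           # img: (B, 1, H, W)
        h = self.encoder(img)
        mu, logvar, angle = h[:, :self.z_dim], h[:, self.z_dim:-1], h[:, -1:]
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterize
        coords = self.decoder(z).view(-1, self.n_atoms, 3)
        recon = self.render(coords, angle)
        kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(-1).mean()
        return ((recon - img.squeeze(1)) ** 2).sum((1, 2)).mean() + kl
```

Training this end-to-end on picked particles gives, per image, both a pose estimate and a point on a continuous conformation manifold, which is the key difference from volume-space heterogeneity methods.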
Proteins are not static bricks! Feasibility study to infer a continuous distribution of all states using an end-to-end model from Cryo-EM images to atom coordinates: https://t.co/CgoBlyr1Ao.@danrsm, @GarneloMarta, @MichaelZielins, @JonasAAdler, @arkitus, @CarlDoersch, @pushmeet pic.twitter.com/9lyCtBQJIG
— Olaf Ronneberger (@ORonneberger) June 29, 2021
2. CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders
Kevin Frans, L.B. Soros, Olaf Witkowski
This work presents CLIPDraw, an algorithm that synthesizes novel drawings based on natural language input. CLIPDraw does not require any training; rather a pre-trained CLIP language-image encoder is used as a metric for maximizing similarity between the given description and a generated drawing. Crucially, CLIPDraw operates over vector strokes rather than pixel images, a constraint that biases drawings towards simpler human-recognizable shapes. Results compare between CLIPDraw and other synthesis-through-optimization methods, as well as highlight various interesting behaviors of CLIPDraw, such as satisfying ambiguous text in multiple ways, reliably producing drawings in diverse artistic styles, and scaling from simple to complex visual representations as stroke count is increased. Code for experimenting with the method is available at: https://colab.research.google.com/github/kvfrans/clipdraw/blob/main/clipdraw.ipynb
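Since CLIPDraw is training-free, the whole method is essentially one optimization loop. Below is a minimal sketch of that loop; `render_strokes` is a hypothetical placeholder for the differentiable vector rasterizer (the paper uses diffvg), and the stroke parameterization is an assumption.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    text = clip.tokenize(["a drawing of a cat"]).to(device)
    text_feat = model.encode_text(text)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Stroke parameters: control points and RGBA colors for 64 Bezier curves.
points = torch.rand(64, 4, 2, device=device, requires_grad=True)
colors = torch.rand(64, 4, device=device, requires_grad=True)
opt = torch.optim.Adam([points, colors], lr=0.01)

for step in range(500):
    # render_strokes is a hypothetical stand-in for the diffvg rasterizer the
    # paper uses; it must be differentiable w.r.t. points and colors.
    img = render_strokes(points, colors)      # -> (1, 3, 224, 224)
    img_feat = model.encode_image(img)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()      # maximize cosine similarity
    opt.zero_grad(); loss.backward(); opt.step()
```

In the paper, random augmentations (crops, perspective distortions) are applied to the drawing before CLIP encoding; that step, omitted above, is what keeps the optimization from producing adversarial noise instead of recognizable shapes.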
I scaled up CLIPDraw (https://t.co/vjoNGvr7Z9) a bit... "a beautiful epic wondrous fantasy painting of [the ocean / lightning / wind / a deep valley]": pic.twitter.com/qjFX7QdxPs
— Rivers Have Wings (@RiversHaveWings) June 29, 2021
CLIPDraw is a way to synthesize stroke-based drawings based on natural language input.
— Kevin Frans (@kvfrans) June 29, 2021
New work w/ @crosslabstokyo @err_more @okw !
blog: https://t.co/uPd1vZZOmB
arxiv: https://t.co/aXVIMfETSF
Colab notebook: https://t.co/2soyjd4zpe pic.twitter.com/JAQfrkPcQo
3. Multimodal Few-Shot Learning with Frozen Language Models
Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S.M. Ali Eslami, Oriol Vinyals, Felix Hill
When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, we present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language). Using aligned image and caption data, we train a vision encoder to represent each image as a sequence of continuous embeddings, such that a pre-trained, frozen language model prompted with this prefix generates the appropriate caption. The resulting system is a multimodal few-shot learner, with the surprising ability to learn a variety of new tasks when conditioned on examples, represented as a sequence of multiple interleaved image and text embeddings. We demonstrate that it can rapidly learn words for new objects and novel visual categories, do visual question-answering with only a handful of examples, and make use of outside knowledge, by measuring a single model on a variety of established and new benchmarks.
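The training recipe is simple enough to sketch compactly. Below, GPT-2 stands in for the paper's much larger frozen LM and a toy CNN stands in for its NF-ResNet vision encoder; only the vision encoder receives gradients. Sizes are assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2")
for p in lm.parameters():
    p.requires_grad = False                   # keep the language model frozen

N_PREFIX, D = 2, lm.config.n_embd            # image -> a short "visual" prefix
vision = nn.Sequential(
    nn.Conv2d(3, 32, 4, 4), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, N_PREFIX * D),
)
opt = torch.optim.Adam(vision.parameters(), lr=1e-4)

def step(image, caption):
    prefix = vision(image).view(1, N_PREFIX, D)
    ids = tok(caption, return_tensors="pt").input_ids
    tokens = lm.transformer.wte(ids)          # caption word embeddings
    inputs = torch.cat([prefix, tokens], dim=1)
    labels = torch.cat([torch.full((1, N_PREFIX), -100), ids], dim=1)
    loss = lm(inputs_embeds=inputs, labels=labels).loss   # -100 masks the prefix
    opt.zero_grad(); loss.backward(); opt.step()          # only vision updates
    return loss.item()
```

At test time, few-shot prompts are built by interleaving several (visual prefix, text) pairs before the query image, exactly as one would interleave text examples for an ordinary LM prompt.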
Our new paper shows how to prompt a pre-trained text language model with a combination of text AND images (🖼️,🔤, 🖼️,🔤, 🖼️,🔤).
— Jacob Menick (@jacobmenick) June 29, 2021
Keep the language model 🧊 frozen 🧊 and train a vision encoder to embed images into the same space as word sequences. https://t.co/Am5OWEwR0O
(1/12) pic.twitter.com/0HFzUV3qD1
4. Inverting and Understanding Object Detectors
Ang Cao, Justin Johnson
As a core problem in computer vision, the performance of object detection has improved drastically in the past few years. Despite their impressive performance, object detectors suffer from a lack of interpretability. Visualization techniques have been developed and widely applied to introspect the decisions made by other kinds of deep learning models; however, visualizing object detectors has been underexplored. In this paper, we propose using inversion as a primary tool to understand modern object detectors and develop an optimization-based approach to layout inversion, allowing us to generate synthetic images recognized by trained detectors as containing a desired configuration of objects. We reveal intriguing properties of detectors by applying our layout inversion technique to a variety of modern object detectors, and further investigate them via validation experiments: they rely on qualitatively different features for classification and regression; they learn canonical motifs of commonly co-occurring objects; they use different visual cues to recognize objects of varying sizes. We hope our insights can help practitioners improve object detectors.
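The mechanics of layout inversion are easy to convey with a small sketch: freeze a trained detector, keep it in training mode so its losses are exposed, and optimize the input pixels against a target layout. This is a generic illustration of the idea, not the authors' exact objective, which also adds image priors.

```python
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.train()                                   # train mode exposes the losses
for p in detector.parameters():
    p.requires_grad = False                        # only the image is optimized

img = torch.rand(3, 480, 640, requires_grad=True)  # start from noise
target = [{"boxes": torch.tensor([[100., 100., 300., 400.]]),
           "labels": torch.tensor([18])}]          # 18 = "dog" in COCO

opt = torch.optim.Adam([img], lr=0.01)
for step in range(200):
    losses = detector([img.clamp(0, 1)], target)   # dict of detection losses
    loss = sum(losses.values())                    # paper adds image priors
    opt.zero_grad(); loss.backward(); opt.step()   # (e.g. smoothness); omitted
```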
Inverting and Understanding Object Detectors
— AK (@ak92501) June 29, 2021
pdf: https://t.co/1CNXCDR09z
abs: https://t.co/d2kzwrrpH3
extend visualization to modern object detector and propose an optimization-based approach for layout inversion pic.twitter.com/EGY5nDycOU
5. Transflower: probabilistic autoregressive dance generation with multimodal attention
Guillermo Valle-Pérez, Gustav Eje Henter, Jonas Beskow, André Holzapfel, Pierre-Yves Oudeyer, Simon Alexanderson
- retweets: 576, favorites: 131 (06/30/2021 13:48:46)
- cs.SD | cs.GR | cs.LG | eess.AS
Dance requires skillful composition of complex movements that follow rhythmic, tonal and timbral features of music. Formally, generating dance conditioned on a piece of music can be expressed as a problem of modelling a high-dimensional continuous motion signal, conditioned on an audio signal. In this work we make two contributions to tackle this problem. First, we present a novel probabilistic autoregressive architecture that models the distribution over future poses with a normalizing flow conditioned on previous poses as well as music context, using a multimodal transformer encoder. Second, we introduce the largest 3D dance-motion dataset to date, obtained with a variety of motion-capture technologies, and including both professional and casual dancers. Using this dataset, we compare our new model against two baselines, via objective metrics and a user study, and show that both the ability to model a probability distribution and the ability to attend over a large motion and music context are necessary to produce interesting, diverse, and realistic dance that matches the music.
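To make the modelling idea concrete, here is a minimal sketch of one conditional coupling layer: the next-pose distribution is a normalizing flow whose transforms are conditioned on a transformer encoding of recent motion and music. All dimensions, and the single-layer flow, are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConditionalCoupling(nn.Module):
    """One affine coupling layer; half the pose dims transform the other half."""
    def __init__(self, pose_dim=64, ctx_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim // 2 + ctx_dim, 256), nn.ReLU(),
            nn.Linear(256, pose_dim),                 # -> scale and shift
        )

    def forward(self, x, ctx):                        # x: (B, pose_dim)
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.net(torch.cat([x1, ctx], -1)).chunk(2, -1)
        y2 = x2 * torch.exp(s) + t
        return torch.cat([x1, y2], -1), s.sum(-1)     # output, log|det J|

# Context: a multimodal transformer over recent motion and music tokens.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(256, 4, batch_first=True), 2)
ctx = encoder(torch.randn(1, 120, 256)).mean(dim=1)   # pool over time

flow = ConditionalCoupling()
z = torch.randn(1, 64)                                # sample base noise...
pose, _ = flow(z, ctx)                                # ...and push it through
```

Training maximizes the flow's exact log-likelihood of ground-truth poses; at generation time one samples z and decodes autoregressively, which is what yields diverse rather than averaged dance.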
Transflower: probabilistic autoregressive dance generation with multimodal attention
— AK (@ak92501) June 29, 2021
pdf: https://t.co/hkaFdNTKYw
abs: https://t.co/dgnF24FNJ5
project page: https://t.co/HP4U6cy1Yh
github: https://t.co/eibtrYbZzO pic.twitter.com/cPKd8EdfIz
6. Early Convolutions Help Transformers See Better
Tete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Dollár, Ross Girshick
Vision transformer (ViT) models exhibit substandard optimizability. In particular, they are sensitive to the choice of optimizer (AdamW vs. SGD), optimizer hyperparameters, and training schedule length. In comparison, modern convolutional neural networks are far easier to optimize. Why is this the case? In this work, we conjecture that the issue lies with the patchify stem of ViT models, which is implemented by a stride-p p×p convolution (p=16 by default) applied to the input image. This large-kernel plus large-stride convolution runs counter to typical design choices of convolutional layers in neural networks. To test whether this atypical design choice causes an issue, we analyze the optimization behavior of ViT models with their original patchify stem versus a simple counterpart where we replace the ViT stem by a small number of stacked stride-two 3×3 convolutions. While the vast majority of computation in the two ViT designs is identical, we find that this small change in early visual processing results in markedly different training behavior in terms of the sensitivity to optimization settings as well as the final model accuracy. Using a convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ~1-2% top-1 accuracy on ImageNet-1k), while maintaining flops and runtime. The improvement can be observed across the wide spectrum of model complexities (from 1G to 36G flops) and dataset scales (from ImageNet-1k to ImageNet-21k). These findings lead us to recommend using a standard, lightweight convolutional stem for ViT models as a more robust architectural choice compared to the original ViT model design.
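The proposed change is small enough to show in full. Below, the original ViT-B patchify stem sits next to a convolutional stem with the same overall 16× downsampling; the channel widths are illustrative rather than the paper's exact values.

```python
import torch.nn as nn

patchify_stem = nn.Conv2d(3, 768, kernel_size=16, stride=16)   # original ViT-B stem

conv_stem = nn.Sequential(                                     # 4 x stride-2 = /16
    nn.Conv2d(3, 48, 3, stride=2, padding=1), nn.BatchNorm2d(48), nn.ReLU(),
    nn.Conv2d(48, 96, 3, stride=2, padding=1), nn.BatchNorm2d(96), nn.ReLU(),
    nn.Conv2d(96, 192, 3, stride=2, padding=1), nn.BatchNorm2d(192), nn.ReLU(),
    nn.Conv2d(192, 384, 3, stride=2, padding=1), nn.BatchNorm2d(384), nn.ReLU(),
    nn.Conv2d(384, 768, 1),                                    # 1x1 to embed dim
)
# Both stems map (B, 3, 224, 224) -> (B, 768, 14, 14); flatten the grid into
# 196 tokens and the rest of the ViT is unchanged (the paper drops one
# transformer block to keep flops matched).
```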
Early Convolutions Help Transformers See Better
— AK (@ak92501) June 29, 2021
pdf: https://t.co/5XTWUDzFag
abs: https://t.co/Faq0Yi18Bi
convolutional stem in ViT dramatically increases optimization stability and also improves peak performance (by ∼1-2% top-1 accuracy on ImageNet-1k) pic.twitter.com/q0gq67AyuF
7. Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft
Ingmar Kanitscheider, Joost Huizinga, David Farhi, William Hebgen Guss, Brandon Houghton, Raul Sampedro, Peter Zhokhov, Bowen Baker, Adrien Ecoffet, Jie Tang, Oleg Klimov, Jeff Clune
An important challenge in reinforcement learning is training agents that can solve a wide variety of tasks. If tasks depend on each other (e.g. needing to learn to walk before learning to run), curriculum learning can speed up learning by focusing on the next best task to learn. We explore curriculum learning in a complex, visual domain with many hard exploration challenges: Minecraft. We find that learning progress (defined as a change in success probability of a task) is a reliable measure of learnability for automatically constructing an effective curriculum. We introduce a learning-progress based curriculum and test it on a complex reinforcement learning problem (called “Simon Says”) where an agent is instructed to obtain a desired goal item. Many of the required skills depend on each other. Experiments demonstrate that: (1) a within-episode exploration bonus for obtaining new items improves performance, (2) dynamically adjusting this bonus across training such that it only applies to items the agent cannot reliably obtain yet further increases performance, (3) the learning-progress based curriculum elegantly follows the learning curve of the agent, and (4) when the learning-progress based curriculum is combined with the dynamic exploration bonus it learns much more efficiently and obtains far higher performance than uniform baselines. These results suggest that combining intra-episode and across-training exploration bonuses with learning progress creates a promising method for automated curriculum generation, which may substantially increase our ability to train more capable, generally intelligent agents.
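A learning-progress curriculum of this flavor is simple to sketch: track a fast and a slow moving average of each task's success rate, and sample tasks in proportion to the gap between them, which approximates the slope of the learning curve. The smoothing scheme and sampling rule below are assumptions, not the paper's exact implementation.

```python
import random

class LPCurriculum:
    def __init__(self, tasks, alpha_fast=0.1, alpha_slow=0.01):
        self.af, self.asl = alpha_fast, alpha_slow
        self.fast = {t: 0.0 for t in tasks}   # fast EMA of success rate
        self.slow = {t: 0.0 for t in tasks}   # slow EMA of success rate

    def update(self, task, success):
        self.fast[task] += self.af * (success - self.fast[task])
        self.slow[task] += self.asl * (success - self.slow[task])

    def sample(self, eps=0.01):
        # |fast - slow| is large exactly while a task is being learned (or
        # forgotten), so those tasks get sampled most; eps keeps exploration.
        w = {t: abs(self.fast[t] - self.slow[t]) + eps for t in self.fast}
        tasks, weights = zip(*w.items())
        return random.choices(tasks, weights=weights)[0]

cur = LPCurriculum(["get_wood", "make_planks", "make_pickaxe"])
task = cur.sample()            # run an episode on `task`, observe success...
cur.update(task, success=1.0)  # ...then feed the outcome back in
```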
Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft
— Aran Komatsuzaki (@arankomatsuzaki) June 29, 2021
Combining intra-episode and across-training exploration bonuses with learning progress creates a promising method for automated curriculum generation. https://t.co/etnZD9bezT pic.twitter.com/gC7dDVEUeE
Thrilled to share "Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft"!
— Ingmar Kanitscheider (@ingkanit) June 29, 2021
Curriculum learning plus a dynamic exploration bonus enables agents to obtain items far up the Minecraft tech tree.
Paper: https://t.co/WysVwjJvsj https://t.co/Rl4uAmbss5
8. SymbolicGPT: A Generative Transformer Model for Symbolic Regression
Mojtaba Valipour, Bowen You, Maysum Panju, Ali Ghodsi
Symbolic regression is the task of identifying a mathematical expression that best fits a provided dataset of input and output values. Due to the richness of the space of mathematical expressions, symbolic regression is generally a challenging problem. While conventional approaches based on genetic evolution algorithms have been used for decades, deep learning-based methods are relatively new and an active research area. In this work, we present SymbolicGPT, a novel transformer-based language model for symbolic regression. This model exploits the advantages of probabilistic language models like GPT, including strength in performance and flexibility. Through comprehensive experiments, we show that our model performs strongly compared to competing models with respect to the accuracy, running time, and data efficiency.
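The overall setup lends itself to a compact sketch: an order-invariant encoder summarizes the (x, y) point cloud into a single conditioning vector, and a GPT-style decoder generates the equation as a token string. The toy vocabulary and all sizes below are assumptions.

```python
import torch
import torch.nn as nn

VOCAB = ["<s>", "</s>", "x", "sin", "exp", "+", "*", "C"]  # toy expression tokens

class SymbolicModel(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.point_enc = nn.Sequential(nn.Linear(2, d), nn.ReLU(), nn.Linear(d, d))
        self.embed = nn.Embedding(len(VOCAB), d)
        layer = nn.TransformerEncoderLayer(d, 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, 3)
        self.head = nn.Linear(d, len(VOCAB))

    def forward(self, points, tokens):      # points: (B, N, 2), tokens: (B, T)
        cond = self.point_enc(points).max(dim=1).values  # max-pool: order-invariant
        h = torch.cat([cond[:, None], self.embed(tokens)], dim=1)
        mask = nn.Transformer.generate_square_subsequent_mask(h.size(1))
        return self.head(self.decoder(h, mask=mask))     # next-token logits

model = SymbolicModel()
logits = model(torch.randn(4, 30, 2), torch.randint(0, 8, (4, 10)))
# Train with cross-entropy against the ground-truth equation string; at test
# time decode greedily, then fit the "C" constants numerically, as the paper
# does with its predicted equation skeletons.
```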
SymbolicGPT: A Generative Transformer Model for Symbolic Regression
— AK (@ak92501) June 29, 2021
pdf: https://t.co/FbTgBbXYpY
abs: https://t.co/5GbC9VyGcN
github: https://t.co/yKXjuUuguG
transformer-based language model for symbolic regression pic.twitter.com/yYF7SAAz69
9. Automatic Differentiation With Higher Infinitesimals, or Computational Smooth Infinitesimal Analysis in Weil Algebra
Hiromi Ishii
- retweets: 210, favorites: 67 (06/30/2021 13:48:47)
- cs.SC | cs.MS | math.CT | math.DG | math.NA
We propose an algorithm to compute the C^∞-ring structure of an arbitrary Weil algebra. It allows us to do some analysis with higher infinitesimals numerically and symbolically. To that end, we first give a brief description of (forward-mode) automatic differentiation (AD) in terms of C^∞-rings. The notion of a C^∞-ring was introduced by Lawvere and used as the fundamental building block of smooth infinitesimal analysis and synthetic differential geometry. We argue that interpreting AD in terms of C^∞-rings gives us a unifying theoretical framework and modular ways to express multivariate partial derivatives. In particular, we can "package" higher-order forward-mode AD as a Weil algebra, and take tensor products to compose them to achieve multivariate higher-order AD. The algorithms in the present paper can also be used for a pedagogical purpose in learning and studying smooth infinitesimal analysis as well.
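A minimal illustration of the folklore special case the paper generalizes: forward-mode AD is arithmetic in R[x]/(x^2) ("dual numbers"), one particular Weil algebra. The paper's algorithm handles arbitrary Weil algebras (higher and nested infinitesimals) and their tensor products.

```python
from dataclasses import dataclass
import math

@dataclass
class Dual:                       # a + b*eps with eps^2 = 0
    a: float                      # value
    b: float = 0.0                # derivative coefficient

    def __add__(self, o): return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o): return Dual(self.a * o.a, self.a * o.b + self.b * o.a)

def sin(d: Dual) -> Dual:         # lift a smooth function: f(a + b*eps) =
    return Dual(math.sin(d.a), math.cos(d.a) * d.b)   # f(a) + f'(a)*b*eps

x = Dual(2.0, 1.0)                # seed dx/dx = 1
y = sin(x * x)                    # y.b == cos(4) * 4 == d/dx sin(x^2) at x = 2
print(y.a, y.b)

# For higher derivatives, work in R[x]/(x^3) and carry (f, f', f''/2!) instead;
# multivariate higher-order AD takes tensor products of such algebras, which is
# exactly what the paper packages as Weil algebras.
```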
We've published our CASC paper. It was folklore that first-order automatic differentiation is related to infinitesimal analysis over R[x]/x^2, but here we formalize general automatic differentiation using the notion of C^∞-rings from synthetic differential geometry, and propose a method for composing higher-order automatic differentiation from general nilpotent infinitesimals. https://t.co/PcZVUnfb7L
— スマートコン (@mr_konn) June 29, 2021
10. Discovering Generalizable Skills via Automated Generation of Diverse Tasks
Kuan Fang, Yuke Zhu, Silvio Savarese, Li Fei-Fei
The learning efficiency and generalization ability of an intelligent agent can be greatly improved by utilizing a useful set of skills. However, the design of robot skills can often be intractable in real-world applications due to the prohibitive amount of effort and expertise that it requires. In this work, we introduce Skill Learning In Diversified Environments (SLIDE), a method to discover generalizable skills via automated generation of a diverse set of tasks. As opposed to prior work on unsupervised discovery of skills which incentivizes the skills to produce different outcomes in the same environment, our method pairs each skill with a unique task produced by a trainable task generator. To encourage generalizable skills to emerge, our method trains each skill to specialize in the paired task and maximizes the diversity of the generated tasks. A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective. The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks. We demonstrate that the proposed method can effectively learn a variety of robot skills in two tabletop manipulation domains. Our results suggest that the learned skills can effectively improve the robot’s performance in various unseen target tasks compared to existing reinforcement learning and skill learning methods.
Can robots discover diverse skills by proposing a diverse set of tasks? SLIDE trains each robot skill to solve a unique generated task and maximize the task diversity.
— Kuan Fang (@KuanFang) June 29, 2021
Project: https://t.co/bqk3SkDKYx
Paper: https://t.co/a4YMRFDYKO
w/ @yukez @silviocinguetta @drfeifei pic.twitter.com/Yi28erNS8h
11. Conormal Spaces and Whitney Stratifications
Martin Helmer, Vidit Nanda
- retweets: 96, favorites: 67 (06/30/2021 13:48:47)
- math.AG | cs.SC | math.AC | math.AT
We describe a new algorithm for computing Whitney stratifications of complex projective varieties. The main ingredients are (a) an algebraic criterion, due to Lê and Teissier, which reformulates Whitney regularity in terms of conormal spaces and maps, and (b) a new interpretation of this conormal criterion via ideal saturations, which can be practically implemented on a computer. We show that this algorithm improves upon the existing state of the art by several orders of magnitude, even for relatively small input varieties. En route, we introduce related algorithms for efficiently stratifying affine varieties, flags on a given variety, and algebraic maps.
My first foray into computational algebraic geometry, joint with Martin Helmer, is now on arxiv. I am far too excited about this work to play it cool, so here's thread about our paper https://t.co/hSz04FJJ83
— Vidit Nanda (@viditnanda) June 29, 2021
12. Data Poisoning Won’t Save You From Facial Recognition
Evani Radiya-Dixit, Florian Tramèr
Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures. By perturbing the images they post online, users can fool models into misclassifying future (unperturbed) pictures. We demonstrate that this strategy provides a false sense of security, as it ignores an inherent asymmetry between the parties: users’ pictures are perturbed once and for all before being published (at which point they are scraped) and must thereafter fool all future models — including models trained adaptively against the users’ past attacks, or models that use technologies discovered after the attack. We evaluate two systems for poisoning attacks against large-scale facial recognition, Fawkes (500,000+ downloads) and LowKey. We demonstrate how an “oblivious” model trainer can simply wait for future developments in computer vision to nullify the protection of pictures collected in the past. We further show that an adversary with black-box access to the attack can (i) train a robust model that resists the perturbations of collected pictures and (ii) detect poisoned pictures uploaded online. We caution that facial recognition poisoning will not admit an “arms race” between attackers and defenders. Once perturbed pictures are scraped, the attack cannot be changed so any future successful defense irrevocably undermines users’ privacy.
Web-scale facial recognition is getting scarily good (see https://t.co/zha60zO5Xf)
— Florian Tramèr (@florian_tramer) June 29, 2021
Popular tools like https://t.co/D7AWts5hvg (500'000+ downloads!) fight back using adversarial examples.
With @evanidixit, we argue that this is hopeless (and dangerous)! https://t.co/WONdANyFcQ
13. Unsupervised Discovery of Actions in Instructional Videos
AJ Piergiovanni, Anelia Angelova, Michael S. Ryoo, Irfan Essa
In this paper we address the problem of automatically discovering atomic actions in an unsupervised manner from instructional videos. Instructional videos contain complex activities and are a rich source of information for intelligent agents, such as autonomous robots or virtual assistants, which can, for example, automatically 'read' the steps from an instructional video and execute them. However, videos are rarely annotated with atomic activities, their boundaries or duration. We present an unsupervised approach to learn atomic actions of structured human tasks from a variety of instructional videos. We propose a sequential stochastic autoregressive model for temporal segmentation of videos, which learns to represent and discover the sequential relationship between different atomic actions of the task, and which provides automatic and unsupervised self-labeling for videos. Our approach outperforms state-of-the-art unsupervised methods by large margins. We will open source the code.
Unsupervised Discovery of Actions in Instructional Videos
— AK (@ak92501) June 29, 2021
pdf: https://t.co/RKpvdmkIQb
abs: https://t.co/YRgGsaEPmZ pic.twitter.com/mwd515LYAw
14. Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
Jiawei Zhao, Steve Dai, Rangharajan Venkatesan, Ming-Yu Liu, Brucek Khailany, Bill Dally, Anima Anandkumar
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts. One promising approach to reduce the energy costs is representing DNNs with low-precision numbers. While it is common to train DNNs with forward and backward propagation in low-precision, training directly over low-precision weights, without keeping a copy of weights in high-precision, remains an unsolved problem. This is due to complex interactions between learning algorithms and low-precision number systems. To address this, we jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam. LNS has a high dynamic range even in a low-bitwidth setting, leading to high energy efficiency and making it relevant for on-board training in energy-constrained edge devices. We design LNS to have the flexibility of choosing different bases for weights and gradients, as they usually require different quantization gaps and dynamic ranges during training. By drawing the connection between LNS and multiplicative update, LNS-Madam ensures low quantization error during weight update, leading to a stable convergence even if the bitwidth is limited. Compared to using a fixed-point or floating-point number system and training with popular learning algorithms such as SGD and Adam, our joint design with LNS and LNS-Madam optimizer achieves better accuracy while requiring smaller bitwidth. Notably, with only 5 bits for gradients, the proposed training framework achieves accuracy comparable to full-precision state-of-the-art models such as ResNet-50 and BERT. Energy estimates based on the math datapath units used during training show that our design achieves over 60x energy reduction compared to FP32 on BERT models.
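The central trick is worth a small sketch: if weights live in LNS as sign plus log2 magnitude, a Madam-style multiplicative update on w becomes an additive, easily quantized step in the log domain. This follows the published Madam update rule; the exact LNS-Madam quantizer, multi-base scheme, and sign-flip handling in the paper differ and are omitted here.

```python
import torch

def lns_madam_step(sign, log2_mag, grad, lr=0.01, gap=2**-3):
    """LNS weight w = sign * 2**log2_mag; the update is additive in log2 space."""
    g = grad / (grad.pow(2).mean().sqrt() + 1e-12)   # normalized gradient
    log2_mag = log2_mag - lr * sign * g              # multiplicative update on w
    log2_mag = torch.round(log2_mag / gap) * gap     # snap to the LNS grid
    return sign, log2_mag                            # (sign flips omitted)

sign = torch.sign(torch.randn(1000))
log2_mag = torch.randn(1000) - 3                     # log2 |w|
grad = torch.randn(1000)
sign, log2_mag = lns_madam_step(sign, log2_mag, grad)
```

Because the step size and the LNS quantization grid are both defined in log2 space, rounding after each update introduces a bounded, well-matched error, which is why convergence stays stable at low bitwidths.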
Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
— AK (@ak92501) June 29, 2021
pdf: https://t.co/4Jy5PnfC6g
over 60x energy reduction compared to FP32 on BERT models. For full training of ResNet-50 on ImageNet, design reduces the carbon emissions by 98% around. pic.twitter.com/kWYTHSVlZp
Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update
— Aran Komatsuzaki (@arankomatsuzaki) June 29, 2021
With only 5-bit for backward pass, multi-base LNS achieves accuracy comparable to full-precision SotA models such as ResNet-50 and BERT. https://t.co/du9dvNp28A pic.twitter.com/RyAAysldFM
15. Unsupervised Skill Discovery with Bottleneck Option Learning
Jaekyeom Kim, Seohong Park, Gunhee Kim
Having the ability to acquire inherent skills from environments without any external rewards or supervision, as humans do, is an important problem. We propose a novel unsupervised skill discovery method named Information Bottleneck Option Learning (IBOL). On top of a linearization of environments that promotes more diverse and distant state transitions, IBOL enables the discovery of diverse skills. Using the information bottleneck framework, it provides abstractions of the learned skills as options, with improved stability and encouraged disentanglement. We empirically demonstrate that IBOL outperforms multiple state-of-the-art unsupervised skill discovery methods on information-theoretic evaluations and downstream tasks in MuJoCo environments, including Ant, HalfCheetah, Hopper and D'Kitty.
Unsupervised Skill Discovery with Bottleneck Option Learning
— AK (@ak92501) June 29, 2021
pdf: https://t.co/lnJmhVI5fr
outperforms multiple sota unsupervised skill discovery methods on the information-theoretic evaluations and downstream tasks in MuJoCo environments pic.twitter.com/LSFHgejjzb
16. Mapping flows on weighted and directed networks with incomplete observations
Jelena Smiljanić, Christopher Blöcker, Daniel Edler, Martin Rosvall
- retweets: 34, favorites: 36 (06/30/2021 13:48:48)
- cs.SI | physics.data-an | physics.soc-ph
Detecting significant community structure in networks with incomplete observations is challenging because the evidence for specific solutions fades away with missing data. For example, recent research shows that flow-based community detection methods can highlight spurious communities in sparse undirected and unweighted networks with missing links. Current Bayesian approaches developed to overcome this problem do not work for incomplete observations in weighted and directed networks that describe network flows. To address this gap, we extend the idea behind the Bayesian estimate of the map equation for unweighted and undirected networks to enable more robust community detection in weighted and directed networks. We derive a weighted and directed prior network that can incorporate metadata information and show how an efficient implementation in the community-detection method Infomap provides more reliable communities even with a significant fraction of data missing.
We extend the idea behind the Bayesian estimate of the map equation for unweighted and undirected networks to enable more robust community detection in weighted and directed networks. Teleportation becomes an asset! https://t.co/vznKxuf3h0 pic.twitter.com/CigDlz3uzn
— Martin Rosvall (@m_rosvall) June 29, 2021
17. RadGraph: Extracting Clinical Entities and Relations from Radiology Reports
Saahil Jain, Ashwin Agrawal, Adriel Saporta, Steven QH Truong, Du Nguyen Duong, Tan Bui, Pierre Chambon, Yuhao Zhang, Matthew P. Lungren, Andrew Y. Ng, Curtis P. Langlotz, Pranav Rajpurkar
Extracting structured clinical information from free-text radiology reports can enable the use of radiology report information for a variety of critical healthcare applications. In our work, we present RadGraph, a dataset of entities and relations in full-text chest X-ray radiology reports based on a novel information extraction schema we designed to structure radiology reports. We release a development dataset, which contains board-certified radiologist annotations for 500 radiology reports from the MIMIC-CXR dataset (14,579 entities and 10,889 relations), and a test dataset, which contains two independent sets of board-certified radiologist annotations for 100 radiology reports split equally across the MIMIC-CXR and CheXpert datasets. Using these datasets, we train and test a deep learning model, RadGraph Benchmark, that achieves a micro F1 of 0.82 and 0.73 on relation extraction on the MIMIC-CXR and CheXpert test sets respectively. Additionally, we release an inference dataset, which contains annotations automatically generated by RadGraph Benchmark across 220,763 MIMIC-CXR reports (around 6 million entities and 4 million relations) and 500 CheXpert reports (13,783 entities and 9,908 relations) with mappings to associated chest radiographs. Our freely available dataset can facilitate a wide range of research in medical natural language processing, as well as computer vision and multi-modal learning when linked to chest radiographs.
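To give a feel for the schema, here is an illustrative RadGraph-style annotation for the report fragment "mild cardiomegaly": entities are typed anatomy/observation spans with certainty levels, linked by relations such as "modify". The field names below paraphrase the paper's schema and are not guaranteed to match the released JSON exactly.

```python
# Hypothetical, simplified rendering of one annotated sentence.
annotation = {
    "text": "There is mild cardiomegaly.",
    "entities": {
        "1": {"tokens": "mild",
              "label": "Observation::definitely present",
              "relations": [["modify", "2"]]},   # "mild" modifies entity 2
        "2": {"tokens": "cardiomegaly",
              "label": "Observation::definitely present",
              "relations": []},
    },
}
```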
Hot off the press: So excited about this new work extracting knowledge graphs from radiology reports--something I have wanted to pursue for many years. Congrats to a fantastic team of @stanford students working with collaborators from @VinBrainAI. https://t.co/ItUEGmWuk7
— Curt Langlotz (@curtlanglotz) June 29, 2021
18. Rethinking Token-Mixing MLP for MLP-based Vision Backbone
Tan Yu, Xu Li, Yunfeng Cai, Mingming Sun, Ping Li
In the past decade, we have witnessed rapid progress in machine vision backbones. By introducing inductive biases from image processing, convolutional neural networks (CNNs) have achieved excellent performance in numerous computer vision tasks and have been established as the de facto backbone. In recent years, inspired by the great success achieved by Transformers in NLP tasks, vision Transformer models have emerged. Using much less inductive bias, they have achieved promising performance in computer vision tasks compared with their CNN counterparts. More recently, researchers have investigated using pure-MLP architectures to build the vision backbone and further reduce the inductive bias, achieving good performance. The pure-MLP backbone is built upon channel-mixing MLPs to fuse the channels and token-mixing MLPs for communication between patches. In this paper, we rethink the design of the token-mixing MLP. We discover that token-mixing MLPs in existing MLP-based backbones are spatial-specific, and thus sensitive to spatial translation. Meanwhile, the channel-agnostic property of the existing token-mixing MLPs limits their capability in mixing tokens. To overcome these limitations, we propose an improved structure termed the Circulant Channel-Specific (CCS) token-mixing MLP, which is spatial-invariant and channel-specific. It takes fewer parameters but achieves higher classification accuracy on the ImageNet1K benchmark.
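A circulant, channel-specific token mixer can be sketched in a few lines: each channel gets its own mixing kernel (channel-specific), but the mixing matrix is circulant, so it is invariant to spatial translation and applies in O(N log N) via FFT. Sizes below are assumptions; the paper's exact layer may differ in details.

```python
import torch
import torch.nn as nn

class CCSTokenMixing(nn.Module):
    def __init__(self, n_tokens=196, n_channels=384):
        super().__init__()
        # One length-N kernel per channel = first column of a circulant matrix.
        self.kernel = nn.Parameter(torch.randn(n_channels, n_tokens) * 0.02)

    def forward(self, x):                     # x: (B, N, C) tokens
        xf = torch.fft.rfft(x, dim=1)         # FFT over the token axis
        kf = torch.fft.rfft(self.kernel.t(), dim=0)    # (N//2+1, C)
        y = torch.fft.irfft(xf * kf, n=x.size(1), dim=1)
        return y                              # circular convolution per channel

mix = CCSTokenMixing()
out = mix(torch.randn(2, 196, 384))           # same shape, spatially invariant
```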
Rethinking Token-Mixing MLP for MLP-based Vision Backbone
— AK (@ak92501) June 29, 2021
pdf: https://t.co/tnezmNHA10
abs: https://t.co/SwAyBCBW4Q
propose a Circulant Channel-specific (CCS) token-mixing MLP, which is spatial-agnostic and channel-specific pic.twitter.com/Tbl4YHWJeV
19. Draw Me a Flower: Grounding Formal Abstract Structures Stated in Informal Natural Language
Royi Lachmy, Valentina Pyatkin, Reut Tsarfaty
Forming and interpreting abstraction is a core process in human communication. In particular, when giving and performing complex instructions stated in natural language (NL), people may naturally evoke abstract constructs such as objects, loops, conditions and functions to convey their intentions in an efficient and precise way. Yet, interpreting and grounding abstraction stated in NL has not been systematically studied in NLP/AI. To elicit naturally-occurring abstractions in NL we develop the Hexagons referential game, where players describe increasingly complex images on a two-dimensional Hexagons board, and other players need to follow these instructions to recreate the images. Using this game we collected the Hexagons dataset, which consists of 164 images and over 3000 naturally-occurring instructions, rich with diverse abstractions. Results of our baseline models on an instruction-to-execution task derived from the Hexagons dataset confirm that higher-level abstractions in NL are indeed more challenging for current systems to process. Thus, this dataset exposes a new and challenging dimension for grounded semantic parsing, and we propose it for the community as a future benchmark to explore more sophisticated and high-level communication within NLP applications.
20. Robust Pose Transfer with Dynamic Details using Neural Video Rendering
Yang-tian Sun, Hao-zhi Huang, Xuan Wang, Yu-kun Lai, Wei Liu, Lin Gao
Pose transfer of human videos aims to generate a high fidelity video of a target person imitating actions of a source person. A few studies have made great progress either through image translation with deep latent features or neural rendering with explicit 3D features. However, both of them rely on large amounts of training data to generate realistic results, and the performance degrades on more accessible internet videos due to insufficient training frames. In this paper, we demonstrate that the dynamic details can be preserved even trained from short monocular videos. Overall, we propose a neural video rendering framework coupled with an image-translation-based dynamic details generation network (D2G-Net), which fully utilizes both the stability of explicit 3D features and the capacity of learning components. To be specific, a novel texture representation is presented to encode both the static and pose-varying appearance characteristics, which is then mapped to the image space and rendered as a detail-rich frame in the neural rendering stage. Moreover, we introduce a concise temporal loss in the training stage to suppress the detail flickering that is made more visible due to high-quality dynamic details generated by our method. Through extensive comparisons, we demonstrate that our neural human video renderer is capable of achieving both clearer dynamic details and more robust performance even on accessible short videos with only 2k - 4k frames.
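The temporal loss mentioned above can be illustrated with a small sketch: penalize frame-to-frame change in the rendered output that is not explained by motion. The warping setup below is a placeholder assumption; the paper's concrete formulation may differ.

```python
import torch
import torch.nn.functional as F

def temporal_loss(frame_t, frame_prev, flow_grid):
    """frame_*: (B, 3, H, W); flow_grid: (B, H, W, 2) sampling grid in [-1, 1]."""
    warped = F.grid_sample(frame_prev, flow_grid, align_corners=False)
    return F.l1_loss(frame_t, warped)   # generated detail should move with the body
```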
Robust Pose Transfer with Dynamic Details using Neural Video Rendering
— AK (@ak92501) June 29, 2021
pdf: https://t.co/XEwCmGdz8p
abs: https://t.co/EZDivY0HFk pic.twitter.com/Gtw1eLzVwJ