1. Transformer is All You Need: Multimodal Multitask Learning with a Unified Transformer
Ronghang Hu, Amanpreet Singh
We propose UniT, a Unified Transformer model to simultaneously learn the most prominent tasks across different domains, ranging from object detection to language understanding and multimodal reasoning. Based on the transformer encoder-decoder architecture, our UniT model encodes each input modality with an encoder and makes predictions on each task with a shared decoder over the encoded input representations, followed by task-specific output heads. The entire model is jointly trained end-to-end with losses from each task. Compared to previous efforts on multi-task learning with transformers, we share the same model parameters across all tasks instead of separately fine-tuning task-specific models, and we handle a much wider variety of tasks across different domains. In our experiments, we learn 7 tasks jointly over 8 datasets, achieving performance comparable to well-established prior work on each domain under the same supervision, with a compact set of model parameters. Code will be released in MMF at https://mmf.sh.
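To make the architecture concrete, here is a minimal PyTorch sketch of the UniT idea (our own toy code, not the released MMF implementation; module names, layer counts, and head sizes are illustrative choices):

```python
import torch
import torch.nn as nn

# A minimal sketch of the UniT idea: one encoder per input modality, a single
# decoder shared by all tasks, and small task-specific output heads.
class UniTSketch(nn.Module):
    def __init__(self, d_model=256, num_queries=16):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "image": nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), 2),
            "text": nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), 2),
        })
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True), 2)
        # Learned task-specific query embeddings, as in detection-style decoders.
        self.task_queries = nn.ParameterDict({
            "detection": nn.Parameter(torch.randn(num_queries, d_model)),
            "vqa": nn.Parameter(torch.randn(1, d_model)),
        })
        self.heads = nn.ModuleDict({
            "detection": nn.Linear(d_model, 91),   # e.g. COCO classes
            "vqa": nn.Linear(d_model, 3129),       # e.g. a VQA answer vocab
        })

    def forward(self, features, modality, task):
        memory = self.encoders[modality](features)
        queries = self.task_queries[task].unsqueeze(0).expand(features.size(0), -1, -1)
        decoded = self.decoder(queries, memory)
        return self.heads[task](decoded)

model = UniTSketch()
img_tokens = torch.randn(2, 49, 256)          # e.g. a flattened CNN feature map
logits = model(img_tokens, modality="image", task="detection")
print(logits.shape)  # torch.Size([2, 16, 91])
```

Joint training then simply sums the per-task losses over batches drawn from the different datasets.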
The "Transformer is All You Need" paper we all knew was coming has burst onto the scene https://t.co/PcSTPGDXsz pic.twitter.com/7fyR1bFYCW
— えるエル (@ImAI_Eruel) February 23, 2021
2. Towards Causal Representation Learning
Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, Yoshua Bengio
The two fields of machine learning and graphical causality arose and developed separately. However, there is now cross-pollination and increasing interest in both fields to benefit from the advances of the other. In the present paper, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: we note that most work in causality starts from the premise that the causal variables are given. A central problem for AI and causality is, thus, causal representation learning, the discovery of high-level causal variables from low-level observations. Finally, we delineate some implications of causality for machine learning and propose key research areas at the intersection of both communities.
Towards Causal Representation Learning @bschoelkopf, @FrancescoLocat8, Stefan Bauer, @rosemary_ke, @NalKalchbrenner, Yoshua Bengio @Mila_Quebec https://t.co/mCknrqLW87
— Anirudh Goyal (@anirudhg9119) February 23, 2021
Towards Causal Representation Learning: led by @bschoelkopf and myself, with amazing co-authors Stefan Bauer, @rosemary_ke, @NalKalchbrenner, @anirudhg9119, Yoshua Bengio, accepted in the Proceedings of the IEEE.
— Francesco Locatello (@FrancescoLocat8) February 23, 2021
Link: https://t.co/CENDcC2GRd pic.twitter.com/BOidP4rDJm
3. Abstraction and Analogy-Making in Artificial Intelligence
Melanie Mitchell
Conceptual abstraction and analogy-making are key abilities underlying humans' capacity to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing AI systems with these abilities, no current AI system comes anywhere close to forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress in this area.
New paper from me: "Abstraction and Analogy-Making in Artificial Intelligence": https://t.co/LJvjN7Q78C
— Melanie Mitchell (@MelMitchell1) February 23, 2021
🧵 (1/4)
4. Linear Transformers Are Secretly Fast Weight Memory Systems
Imanol Schlag, Kazuki Irie, Jürgen Schmidhuber
We show the formal equivalence of linearised self-attention mechanisms and fast weight memories from the early '90s. From this observation we infer a memory capacity limitation of recent linearised softmax attention variants. With finite memory, a desirable behaviour of fast weight memory models is to manipulate the contents of memory and dynamically interact with it. Inspired by previous work on fast weights, we propose to replace the update rule with an alternative rule yielding such behaviour. We also propose a new kernel function to linearise attention, balancing simplicity and effectiveness. We conduct experiments on synthetic retrieval problems as well as standard machine translation and language modelling tasks which demonstrate the benefits of our methods.
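The core observation is easy to verify numerically. Below is a small sketch (our own toy code, with a generic ELU+1 feature map) showing that causal linearised attention, computed the usual parallel way, equals a recurrent computation that maintains a fast weight matrix of value-key outer products:

```python
import torch

# Attention with a feature map phi (instead of softmax) can be computed
# recurrently by maintaining a "fast weight" matrix W = sum_i v_i phi(k_i)^T.
def phi(x):                       # any positive feature map; ELU+1 is common
    return torch.nn.functional.elu(x) + 1

T, d = 6, 4
q, k, v = torch.randn(T, d), torch.randn(T, d), torch.randn(T, d)

# (a) parallel form: causal linearised attention
out_parallel = []
for t in range(T):
    num = sum(phi(k[i]) @ phi(q[t]) * v[i] for i in range(t + 1))
    den = sum(phi(k[i]) @ phi(q[t]) for i in range(t + 1))
    out_parallel.append(num / den)

# (b) recurrent form: fast weight memory updated by outer products
W = torch.zeros(d, d)             # fast weights: value/key associations
z = torch.zeros(d)                # normaliser state
out_recurrent = []
for t in range(T):
    W = W + torch.outer(v[t], phi(k[t]))   # write: W <- W + v_t phi(k_t)^T
    z = z + phi(k[t])
    out_recurrent.append(W @ phi(q[t]) / (z @ phi(q[t])))  # read with query

print(torch.allclose(torch.stack(out_parallel),
                     torch.stack(out_recurrent), atol=1e-5))  # True
```

The paper's proposed improvement replaces the purely additive write in (b) with a delta-style rule that first removes whatever value is currently stored under phi(k_t) before writing the new one, and introduces DPFP as the feature map.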
Linear Transformers Are Secretly Fast Weight Memory Systems
— Aran Komatsuzaki (@arankomatsuzaki) February 23, 2021
Shows the formal equivalence of linearised self-attention mechanisms and fast weight memories from the early ’90s. https://t.co/SoGNlhgPqI pic.twitter.com/BSznTn8rt8
Linear Transformers Are Secretly Fast Weight Memory Systems
— AK (@ak92501) February 23, 2021
pdf: https://t.co/1VIAmxB0x2
abs: https://t.co/JpqPDlPlSq pic.twitter.com/QMsFZy50Lt
If a Transformer replaces the softmax with a linear (or feature-mapped) inner product and computes the outer product between values and keys first, it can be viewed as a fast weight memory mechanism. The authors introduce an operation that erases existing values before writing new ones, and propose a model using DPFP, a transformation designed to keep stored memories from interfering. https://t.co/SkpC7DwbQu
— Daisuke Okanohara (@hillbig) February 23, 2021
5. Do Generative Models Know Disentanglement? Contrastive Learning is All You Need
Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
Disentangled generative models are typically trained with an extra regularization term, which encourages the traversal of each latent factor to make a distinct and independent change, at the cost of generation quality. When traversing the latent space of generative models trained without the disentanglement term, the generated samples show semantically meaningful change, raising the question: do generative models know disentanglement? We propose an unsupervised and model-agnostic method: Disentanglement via Contrast (DisCo) in the Variation Space. DisCo consists of: (i) a Navigator providing traversal directions in the latent space, and (ii) a Δ-Contrastor composed of two shared-weight Encoders, which encode image pairs along these directions into disentangled representations, and a difference operator that maps the encoded representations to the Variation Space. We propose two further key techniques for DisCo: an entropy-based domination loss to make the encoded representations more disentangled, and a strategy of flipping hard negatives to address directions with the same semantic meaning. By optimizing the Navigator to discover disentangled directions in the latent space and the Encoders to extract disentangled representations from images with Contrastive Learning, DisCo achieves state-of-the-art disentanglement given pretrained non-disentangled generative models, including GAN, VAE, and Flow. Project page at https://github.com/xrenaa/DisCo.
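A rough sketch of the training signal, assuming stub networks in place of a real pretrained generator and encoder (all names and dimensions here are our own illustration, not the official repo's code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Linear(16, 64)            # stand-in for a frozen pretrained generator
encoder = nn.Linear(64, 32)      # shared-weight encoder of the Contrastor
directions = nn.Parameter(torch.randn(8, 16))   # Navigator: K latent directions

def variation(z, k, step=1.0):
    """Encode an image pair along direction k; return its Variation Space vector."""
    d = F.normalize(directions[k], dim=0)
    return encoder(G(z + step * d)) - encoder(G(z))   # difference operator

z = torch.randn(4, 16)                   # batch of latent codes
k_pos, k_neg = 3, 5
anchor = F.normalize(variation(z, k_pos), dim=-1)
positive = F.normalize(variation(torch.randn(4, 16), k_pos), dim=-1)
negative = F.normalize(variation(z, k_neg), dim=-1)

# InfoNCE-style loss: variations along the same direction should align,
# variations along different directions should not.
logits = torch.cat([(anchor * positive).sum(-1, keepdim=True),
                    (anchor * negative).sum(-1, keepdim=True)], dim=1) / 0.1
loss = F.cross_entropy(logits, torch.zeros(4, dtype=torch.long))
loss.backward()   # gradients flow to both the Navigator and the Encoder
```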
Do Generative Models Know Disentanglement? Contrastive Learning is All You Need
— AK (@ak92501) February 23, 2021
pdf: https://t.co/XtEGWNA6r5
abs: https://t.co/Gb2iT3qiez pic.twitter.com/LWcNRwSNoO
6. Dynamical Analysis of the EIP-1559 Ethereum Fee Market
Stefanos Leonardos, Barnabé Monnot, Daniël Reijsbergen, Stratis Skoulakis, Georgios Piliouras
Participation in permissionless blockchains results in competition over system resources, which needs to be controlled with fees. Ethereum's current fee mechanism is implemented via a first-price auction that results in unpredictable fees as well as other inefficiencies. EIP-1559 is a recent, improved proposal that introduces a number of innovative features, such as a dynamically adaptive base fee that is burned instead of being paid to the miners. Despite intense interest in understanding its properties, several basic questions, such as whether, and under what conditions, this protocol self-stabilizes, have remained elusive thus far. We perform a thorough analysis of the resulting fee market dynamics via a combination of tools from game theory and dynamical systems. We start by providing bounds on the step-size of the base fee update rule that suffice for global convergence to equilibrium via Lyapunov arguments. In the negative direction, we show that for larger step-sizes instability and even formally chaotic behavior are possible under a wide range of settings. We complement these qualitative results with quantitative bounds on the resulting range of base fees. We conclude our analysis with a thorough experimental case study that corroborates our theoretical findings.
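The base fee update rule itself is public and simple. The toy simulation below (the linear demand curve is our own assumption) illustrates the step-size question the paper studies: the protocol's actual adjustment quotient of 1/8 converges here, while a much larger step-size sends the dynamic into chaotic oscillation:

```python
# The update rule is EIP-1559's; the demand model is a toy assumption.
TARGET = 15_000_000          # target gas per block
LIMIT = 30_000_000           # block gas limit

def demand(base_fee):
    """Toy demand: gas from users whose valuation exceeds the base fee."""
    return min(LIMIT, max(0, int(30_000_000 * (1 - base_fee / 200))))

def step(base_fee, adjustment=1 / 8):
    """EIP-1559 update: move the base fee toward filling exactly TARGET gas."""
    gas_used = demand(base_fee)
    return base_fee * (1 + adjustment * (gas_used - TARGET) / TARGET)

for adjustment in (1 / 8, 3.0):      # protocol step-size vs. an unstable one
    fee = 10.0
    for _ in range(50):
        fee = step(fee, adjustment)
    print(f"adjustment={adjustment}: base fee after 50 blocks ≈ {fee:.2f}")
```

With this demand curve the update is a logistic-type map, which is exactly why large step-sizes can produce the formally chaotic behavior the paper proves.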
We have a new paper on #eip1559, written with @StefLeonardos, Daniël Reijsbergen, Stratis Skoulakis and Georgios Piliouras!
— Barnabé Monnot (@barnabemonnot) February 23, 2021
"Dynamical Analysis of the EIP-1559 Ethereum Fee Market" is available on arXiv: https://t.co/rZ5R1nRmNK pic.twitter.com/j1JFwTW1zP
7. Do We Really Need Explicit Position Encodings for Vision Transformers?
Xiangxiang Chu, Bo Zhang, Zhi Tian, Xiaolin Wei, Huaxia Xia
Almost all visual transformers such as ViT or DeiT rely on predefined positional encodings to incorporate the order of each input token. These encodings are often implemented as learnable fixed-dimension vectors or sinusoidal functions of different frequencies, which cannot accommodate variable-length input sequences. This inevitably limits a wider application of transformers in vision, where many tasks require changing the input size on-the-fly. In this paper, we propose to employ a conditional position encoding scheme which is conditioned on the local neighborhood of the input token. It is effortlessly implemented as what we call a Position Encoding Generator (PEG), which can be seamlessly incorporated into the current transformer framework. Our new model with PEG is named Conditional Position encoding Visual Transformer (CPVT) and can naturally process input sequences of arbitrary length. We demonstrate that CPVT produces visually similar attention maps and even better performance than models with predefined positional encodings. We obtain state-of-the-art results on the ImageNet classification task compared with visual Transformers to date. Our code will be made available at https://github.com/Meituan-AutoML/CPVT .
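The PEG boils down to a depthwise convolution applied to the tokens reshaped back into their 2D layout, added residually. A minimal sketch (our simplification of the idea):

```python
import torch
import torch.nn as nn

# Positional information is produced on the fly from each token's local
# neighbourhood, so any input resolution is handled naturally.
class PEG(nn.Module):
    def __init__(self, dim=256, k=3):
        super().__init__()
        # depthwise conv; the zero padding is what leaks absolute position info
        self.proj = nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim)

    def forward(self, tokens, h, w):
        b, n, c = tokens.shape                 # (batch, h*w tokens, channels)
        feat = tokens.transpose(1, 2).view(b, c, h, w)
        return tokens + self.proj(feat).flatten(2).transpose(1, 2)

peg = PEG(dim=256)
print(peg(torch.randn(2, 14 * 14, 256), 14, 14).shape)  # works at any h, w
```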
Do We Really Need Explicit Position Encodings for Vision Transformers?
— AK (@ak92501) February 23, 2021
pdf: https://t.co/g3hqc9e2SU
abs: https://t.co/QCgOF9RFyA pic.twitter.com/ghvQaovcWE
8. A Theory of Label Propagation for Subpopulation Shift
Tianle Cai, Ruiqi Gao, Jason D. Lee, Qi Lei
One of the central problems in machine learning is domain adaptation. Unlike past theoretical work, we consider a new model for subpopulation shift in the input or representation space. In this work, we propose a provably effective framework for domain adaptation based on label propagation. In our analysis, we use a simple but realistic "expansion" assumption proposed by Wei et al. (2021). Using a teacher classifier trained on the source domain, our algorithm not only propagates labels to the target domain but also improves upon the teacher. By leveraging existing generalization bounds, we also obtain end-to-end finite-sample guarantees on the entire algorithm. In addition, we extend our theoretical framework to a more general setting of source-to-target transfer based on a third unlabeled dataset, which can easily be applied in various learning scenarios.
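For intuition, here is the classic graph-based form of label propagation on toy data (our own illustration; the paper's algorithm instead propagates from a teacher classifier under the expansion assumption, but the averaging mechanics are the same):

```python
import numpy as np

# Label propagation over an input-space similarity graph.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
labeled = rng.choice(200, size=10, replace=False)    # "teacher-labelled" seed set

# graph weights from an RBF kernel; propagation repeatedly averages neighbours
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 0.5)
P = W / W.sum(axis=1, keepdims=True)

F = np.zeros((200, 2))
F[labeled, y[labeled]] = 1.0
for _ in range(100):
    F = P @ F
    F[labeled] = 0.0
    F[labeled, y[labeled]] = 1.0        # clamp the labelled points

pred = F.argmax(1)
mask = ~np.isin(np.arange(200), labeled)
print("accuracy on unlabelled points:", (pred[mask] == y[mask]).mean())
```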
Subpopulation shift is a ubiquitous component of natural distribution shift. We propose a general theoretical framework of learning under subpopulation shift based on label propagation. And our insights can help to improve domain adaptation algorithms. https://t.co/Ou5mViBZNa pic.twitter.com/uSL4oMFVPX
— Tianle Cai (@tianle_cai) February 23, 2021
9. Towards Accurate and Compact Architectures via Neural Architecture Transformer
Yong Guo, Yin Zheng, Mingkui Tan, Qi Chen, Zhipeng Li, Jian Chen, Peilin Zhao, Junzhou Huang
Designing effective architectures is one of the key factors behind the success of deep neural networks. Existing deep architectures are either manually designed or automatically searched by some Neural Architecture Search (NAS) methods. However, even a well-designed/searched architecture may still contain many nonsignificant or redundant modules/operations. Thus, it is necessary to optimize the operations inside an architecture to improve the performance without introducing extra computational cost. To this end, we have proposed a Neural Architecture Transformer (NAT) method which casts the optimization problem into a Markov Decision Process (MDP) and seeks to replace the redundant operations with more efficient operations, such as skip or null connections. Note that NAT only considers a small number of possible transitions and thus comes with a limited search/transition space. As a result, such a small search space may hamper the performance of architecture optimization. To address this issue, we propose a Neural Architecture Transformer++ (NAT++) method which further enlarges the set of candidate transitions to improve the performance of architecture optimization. Specifically, we present a two-level transition rule to obtain valid transitions, i.e., allowing operations to have more efficient types (e.g., convolution → separable convolution) or smaller kernel sizes (e.g., 5×5 → 3×3). Note that different operations may have different valid transitions. We further propose a Binary-Masked Softmax (BMSoftmax) layer to omit the possible invalid transitions. Extensive experiments on several benchmark datasets show that the transformed architecture significantly outperforms both its original counterpart and the architectures optimized by existing methods.
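A plausible reading of the Binary-Masked Softmax is an ordinary softmax in which the transitions invalid for a given source operation are masked out before normalisation, so they receive exactly zero probability. A hedged sketch:

```python
import torch
import torch.nn.functional as F

# Candidate transitions that are invalid for a given source operation are
# masked to -inf before the softmax, so they get exactly zero probability.
def bm_softmax(logits, valid_mask):
    # valid_mask: 1 for allowed transitions, 0 for invalid ones
    masked = logits.masked_fill(valid_mask == 0, float("-inf"))
    return F.softmax(masked, dim=-1)

# e.g. a 5x5 conv might transition to {itself, 3x3 conv, separable 3x3, skip, null}
logits = torch.randn(5)
valid = torch.tensor([1, 1, 1, 1, 0])    # last candidate is invalid here
print(bm_softmax(logits, valid))         # last entry is exactly 0
```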
Towards Accurate and Compact Architectures via Neural Architecture Transformer
— AK (@ak92501) February 23, 2021
pdf: https://t.co/fzGw2lw7YU
abs: https://t.co/hUL46bAQrf
github: https://t.co/PfRESfyZbR pic.twitter.com/OW6YD8cVEs
10. Kindergarden quantum mechanics graduates (…or how I learned to stop gluing LEGO together and love the ZX-calculus)
Bob Coecke, Dominic Horsman, Aleks Kissinger, Quanlong Wang
This paper is a 'spiritual child' of the 2005 lecture notes Kindergarten Quantum Mechanics, which showed how a simple, pictorial extension of Dirac notation allowed several quantum features to be easily expressed and derived, using language even a kindergartner can understand. Central to that approach was the use of pictures and pictorial transformation rules to understand and derive features of quantum theory and computation. However, this approach left many wondering 'where's the beef?' In other words, was this new approach capable of producing new results, or was it simply an aesthetically pleasing way to restate stuff we already know? The aim of this sequel paper is to say 'here's the beef!', and highlight some of the major results of the approach advocated in Kindergarten Quantum Mechanics, and how they are being applied to tackle practical problems on real quantum computers. We will focus mainly on what has become the Swiss army knife of the pictorial formalism: the ZX-calculus. First we look at some of the ideas behind the ZX-calculus, comparing and contrasting it with the usual quantum circuit formalism. We then survey results from the past 2 years falling into three categories: (1) completeness of the rules of the ZX-calculus, (2) state-of-the-art quantum circuit optimisation results in commercial and open-source quantum compilers relying on ZX, and (3) the use of ZX in translating real-world stuff like natural language into quantum circuits that can be run on today's (very limited) quantum hardware. We also take the title literally, and outline an ongoing experiment aiming to show that the ZX-calculus enables children to do cutting-edge quantum computing stuff. If anything, this would truly confirm that 'kindergarten quantum mechanics' wasn't just a joke.
QIP 2008 had 10 long accepted talks, 20 shorter. Cites for long ones: 59, 57, 55, 28, 124, 47, 28, 22, 14, 52. Our ZX-calculus paper, now 500 cites and prominent in quantum industry, rejected with one-line mocking reviews. Who's laughing now QIP? :) https://t.co/y0YiYaRYXI pic.twitter.com/EDlEzbvfx3
— bOb cOeCke (@coecke) February 23, 2021
11. Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
Xuanchi Ren, Tao Yang, Yuwang Wang, Wenjun Zeng
Content and style (C-S) disentanglement aims to decompose the underlying explanatory factors of objects into two independent subspaces. From the unsupervised disentanglement perspective, we rethink content and style and propose a formulation for unsupervised C-S disentanglement based on our assumption that different factors are of different importance and popularity for image reconstruction, which serves as a data bias. The corresponding model inductive bias is introduced by our proposed C-S disentanglement Module (C-S DisMo), which assigns different and independent roles to content and style when approximating the real data distributions. Specifically, each content embedding from the dataset, which encodes the most dominant factors for image reconstruction, is assumed to be sampled from a shared distribution across the dataset. The style embedding for a particular image, encoding the remaining factors, is used to customize the shared distribution through an affine transformation. The experiments on several popular datasets demonstrate that our method achieves state-of-the-art unsupervised C-S disentanglement, comparable to or even better than supervised methods. We verify the effectiveness of our method on downstream tasks: domain translation and single-view 3D reconstruction. Project page at https://github.com/xrenaa/CS-DisMo.
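A sketch of the C-S asymmetry described above, with stub networks standing in for the real modules (all names and sizes are our own illustration, not the repo's code):

```python
import torch
import torch.nn as nn

# The content code follows a distribution shared across the dataset, while the
# per-image style code customises that distribution via an affine transform.
content = torch.randn(8, 32)              # ~ shared prior across the dataset
style_net = nn.Linear(16, 2 * 32)         # style -> (scale, shift)
decoder = nn.Linear(32, 64)               # stand-in for the image generator

style = torch.randn(8, 16)
scale, shift = style_net(style).chunk(2, dim=-1)
recon = decoder(scale * content + shift)  # style customises the content code
print(recon.shape)
```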
Rethinking Content and Style: Exploring Bias for Unsupervised Disentanglement
— AK (@ak92501) February 23, 2021
pdf: https://t.co/DLq8Xi0rST
abs: https://t.co/bZDcNhQTMG pic.twitter.com/ov62elU1JH
12. Reinforcement Learning with Prototypical Representations
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto
Learning effective representations in image-based environments is crucial for sample efficient Reinforcement Learning (RL). Unfortunately, in RL, representation learning is confounded with the exploratory experience of the agent — learning a useful representation requires diverse data, while effective exploration is only possible with coherent representations. Furthermore, we would like to learn representations that not only generalize across tasks but also accelerate downstream exploration for efficient task-specific training. To address these challenges we propose Proto-RL, a self-supervised framework that ties representation learning with exploration through prototypical representations. These prototypes simultaneously serve as a summarization of the exploratory experience of an agent as well as a basis for representing observations. We pre-train these task-agnostic representations and prototypes on environments without downstream task information. This enables state-of-the-art downstream policy learning on a set of difficult continuous control tasks.
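A rough sketch of the prototype machinery (not the full Proto-RL method, which additionally uses the prototypes for a particle-based exploration bonus; tensors and sizes here are our own toy choices):

```python
import torch
import torch.nn.functional as F

# Observations are embedded, compared against learnable prototypes, and the
# resulting soft assignments can drive both representation learning and
# novelty-style exploration bonuses.
obs_emb = F.normalize(torch.randn(32, 64), dim=-1)        # encoded observations
prototypes = F.normalize(torch.randn(16, 64), dim=-1)     # learnable prototypes

scores = obs_emb @ prototypes.T                # cosine similarity to prototypes
assign = F.softmax(scores / 0.1, dim=-1)       # soft cluster assignment

# a simple novelty signal: observations far from every prototype are "new"
novelty = 1.0 - scores.max(dim=-1).values
print(assign.shape, novelty.shape)             # torch.Size([32, 16]) torch.Size([32])
```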
Reinforcement Learning with Prototypical Representations
— AK (@ak92501) February 23, 2021
pdf: https://t.co/Czok4hOaI6
abs: https://t.co/a3vsdrrkvg
github: https://t.co/S3SQ5u84eh pic.twitter.com/j7M7Z8qEtP
13. VisualGPT: Data-efficient Image Captioning by Balancing Visual Input and Linguistic Knowledge from Pretraining
Jun Chen, Han Guo, Kai Yi, Boyang Li, Mohamed Elhoseiny
In this paper, we aim to improve the data efficiency of image captioning. We propose VisualGPT, a data-efficient image captioning model that leverages the linguistic knowledge from a large pretrained language model (LM). A crucial challenge is to balance between the use of visual information in the image and prior linguistic knowledge acquired from pretraining. We designed a novel self-resurrecting encoder-decoder attention mechanism to quickly adapt the pretrained LM as the language decoder on a small amount of in-domain training data. The proposed self-resurrecting activation unit produces sparse activations but is not susceptible to zero gradients. When trained on 0.1%, 0.5% and 1% of MS COCO and Conceptual Captions, the proposed model, VisualGPT, surpasses strong image captioning baselines. VisualGPT outperforms the best baseline model by up to 10.8% CIDEr on MS COCO and up to 5.4% CIDEr on Conceptual Captions. We also perform a series of ablation studies to quantify the utility of each system component. To the best of our knowledge, this is the first work that improves data efficiency of image captioning by utilizing an LM pretrained on unimodal data. Our code is available at: https://github.com/Vision-CAIR/VisualGPT.
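A hedged guess at the gating idea, reconstructed from the abstract alone (function names, threshold, and shapes are our own; consult the repo for the actual mechanism): complementary gates decide how much of each hidden unit comes from visual cross-attention versus the pretrained LM, and small activations are zeroed without a hard saturating function, so units can "resurrect" when gradients return.

```python
import torch

def srau_mix(visual, linguistic, gate_logits, tau=0.2):
    g = torch.sigmoid(gate_logits)
    b_vis = g * (g > tau)                    # sparse but not permanently dead
    b_lan = (1 - g) * ((1 - g) > tau)
    return b_vis * visual + b_lan * linguistic

vis = torch.randn(2, 10, 256)    # cross-attention over image features
lan = torch.randn(2, 10, 256)    # pretrained LM hidden states
print(srau_mix(vis, lan, torch.randn(2, 10, 256)).shape)
```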
VisualGPT: Data-efficient Image Captioning by Balancing Visual Input and Linguistic Knowledge from Pretraining
— AK (@ak92501) February 23, 2021
pdf: https://t.co/SnbbDfhg8V
abs: https://t.co/Oikhtc6pJt pic.twitter.com/IWVEyH5Abt
14. Position Information in Transformers: An Overview
Philipp Dufter, Martin Schmitt, Hinrich Schütze
Transformers are arguably the main workhorse in recent Natural Language Processing research. By definition, a Transformer is invariant with respect to reorderings of the input. However, language is inherently sequential, and word order is essential to the semantics and syntax of an utterance. In this paper, we provide an overview of common methods to incorporate position information into Transformer models. The objectives of this survey are to i) showcase that position information in Transformers is a vibrant and extensive research area; ii) enable the reader to compare existing methods by providing a unified notation and meaningful clustering; iii) indicate what characteristics of an application should be taken into account when selecting a position encoding; and iv) provide stimuli for future research.
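As a concrete example of the kind of position model such a survey covers, here is the original absolute sinusoidal encoding from "Attention Is All You Need", where PE[pos, 2i] = sin(pos / 10000^(2i/d)) and PE[pos, 2i+1] = cos(pos / 10000^(2i/d)):

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model):
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model // 2)[None, :]
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even dimensions
    pe[:, 1::2] = np.cos(angles)   # odd dimensions
    return pe

print(sinusoidal_pe(128, 512).shape)   # (128, 512), added to token embeddings
```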
Have you ever thought about the fact that a Transformer w/o a position model sees language as a bag of words?
— Martin Schmitt (@mnschmit) February 23, 2021
We (i.e., @PDufter, @HinrichSchuetze, and myself) just finished the first version of a survey on different position models for Transformers. https://t.co/UQUfYFHNqj
1/3
15. MedAug: Contrastive learning leveraging patient metadata improves representations for chest X-ray interpretation
Yen Nhi Truong Vu, Richard Wang, Niranjan Balachandar, Can Liu, Andrew Y. Ng, Pranav Rajpurkar
Self-supervised contrastive learning between pairs of multiple views of the same image has been shown to successfully leverage unlabeled data to produce meaningful visual representations for both natural and medical images. However, there has been limited work on determining how to select pairs for medical images, where the availability of patient metadata can be leveraged to improve representations. In this work, we develop a method to select positive pairs coming from views of possibly different images through the use of patient metadata. We compare strategies for selecting positive pairs for chest X-ray interpretation, including requiring them to be from the same patient, imaging study, or laterality. We evaluate downstream task performance by fine-tuning the linear layer on 1% of the labeled dataset for pleural effusion classification. Our best-performing positive pair selection strategy, which uses images from the same patient and the same study across all lateralities, achieves performance increases of 3.4% and 14.4% in mean AUC over a previous contrastive method and an ImageNet-pretrained baseline, respectively. Our controlled experiments show that the keys to improving downstream performance on disease classification are (1) using patient metadata to appropriately create positive pairs from different images with the same underlying pathologies, and (2) maximizing the number of different images used in query pairing. In addition, we explore leveraging patient metadata to select hard negative pairs for contrastive learning, but do not find improvement over baselines that do not use metadata. Our method is broadly applicable to medical image interpretation and allows flexibility for incorporating medical insights in choosing pairs for contrastive learning.
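A sketch of metadata-driven positive-pair selection (field names are our illustration, not the paper's exact schema): two different images qualify as a positive pair when they come from the same patient and the same study, regardless of laterality, which was the best strategy above.

```python
import itertools

scans = [
    {"id": 0, "patient": "p1", "study": "s1", "laterality": "frontal"},
    {"id": 1, "patient": "p1", "study": "s1", "laterality": "lateral"},
    {"id": 2, "patient": "p1", "study": "s2", "laterality": "frontal"},
    {"id": 3, "patient": "p2", "study": "s3", "laterality": "frontal"},
]

def positive_pairs(scans, same_study=True):
    for a, b in itertools.combinations(scans, 2):
        if a["patient"] != b["patient"]:
            continue
        if same_study and a["study"] != b["study"]:
            continue
        yield a["id"], b["id"]       # any laterality is allowed

print(list(positive_pairs(scans)))   # [(0, 1)]
```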
Can we leverage patient metadata for contrastive learning with medical images?
— Pranav Rajpurkar (@pranavrajpurkar) February 23, 2021
Yes! We propose to treat images that share common properties (e.g. patient, study, laterality) as positive pairs.
Paper🎉 https://t.co/O6F78eoLeR @nhi_truongvu, @richcmwang, @Nir_Bala @StanfordAILab pic.twitter.com/KgrneBO01g
16. Learning Neural Network Subspaces
Mitchell Wortsman, Maxwell Horton, Carlos Guestrin, Ali Farhadi, Mohammad Rastegari
Recent observations have advanced our understanding of the neural network optimization landscape, revealing the existence of (1) paths of high accuracy containing diverse solutions and (2) wider minima offering improved performance. Previous methods observing diverse paths require multiple training runs. In contrast, we aim to leverage both properties (1) and (2) with a single method and in a single training run. With a computational cost similar to training one model, we learn lines, curves, and simplexes of high-accuracy neural networks. These neural network subspaces contain diverse solutions that can be ensembled, approaching the ensemble performance of independently trained networks without the training cost. Moreover, using the subspace midpoint boosts accuracy, calibration, and robustness to label noise, outperforming Stochastic Weight Averaging.
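A minimal sketch of learning a line of networks, assuming a toy linear model (our own simplification of the idea, not the authors' code): keep two endpoint weight sets and, on each step, evaluate the loss at a random point on the segment between them so the whole line becomes low-loss.

```python
import copy
import torch
import torch.nn as nn

net1 = nn.Linear(10, 2)
net2 = copy.deepcopy(net1)
for p in net2.parameters():        # perturb so the endpoints differ
    p.data += 0.01 * torch.randn_like(p)

def forward_on_line(x, alpha):
    """Run the network whose weights are alpha*w1 + (1-alpha)*w2."""
    w = alpha * net1.weight + (1 - alpha) * net2.weight
    b = alpha * net1.bias + (1 - alpha) * net2.bias
    return torch.nn.functional.linear(x, w, b)

x = torch.randn(4, 10)
alpha = torch.rand(()).item()      # sampled per step during training
print(forward_on_line(x, alpha).shape)   # gradients reach both endpoints
print(forward_on_line(x, 0.5))           # the midpoint is the star of the paper
```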
Instead of a single neural network, why not train lines, curves and simplexes in parameter space?
— Gabriel Ilharco (@gabriel_ilharco) February 23, 2021
Fantastic work by @Mitchnw et al. exploring how this idea can lead to more accurate and robust models: https://t.co/9VpB3TdJR6 pic.twitter.com/mHDmVTS39W
17. On Calibration and Out-of-domain Generalization
Yoav Wald, Amir Feder, Daniel Greenfeld, Uri Shalit
Out-of-domain (OOD) generalization is a significant challenge for machine learning models. To overcome it, many novel techniques have been proposed, often focused on learning models with certain invariance properties. In this work, we draw a link between OOD performance and model calibration, arguing that calibration across multiple domains can be viewed as a special case of an invariant representation leading to better OOD generalization. Specifically, we prove in a simplified setting that models which achieve multi-domain calibration are free of spurious correlations. This leads us to propose multi-domain calibration as a measurable surrogate for the OOD performance of a classifier. An important practical benefit of calibration is that there are many effective tools for calibrating classifiers. We show that these tools are easy to apply and adapt for a multi-domain setting. Using five datasets from the recently proposed WILDS OOD benchmark we demonstrate that simply re-calibrating models across multiple domains in a validation set leads to significantly improved performance on unseen test domains. We believe this intriguing connection between calibration and OOD generalization is promising from a practical point of view and deserves further research from a theoretical point of view.
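One of the standard tools the authors point to is temperature scaling. A toy sketch, fitting a single temperature on validation logits pooled across domains (the tensors here are synthetic; a per-domain temperature is the natural extension):

```python
import torch

val_logits = torch.randn(500, 10)                    # pooled from several domains
val_labels = torch.randint(0, 10, (500,))

log_t = torch.zeros((), requires_grad=True)          # temperature = exp(log_t)
opt = torch.optim.LBFGS([log_t], max_iter=50)

def closure():
    opt.zero_grad()
    # minimise NLL of temperature-scaled logits on the validation set
    loss = torch.nn.functional.cross_entropy(val_logits / log_t.exp(), val_labels)
    loss.backward()
    return loss

opt.step(closure)
print("fitted temperature:", log_t.exp().item())
# at test time: softmax(test_logits / T) gives calibrated probabilities
```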
*following thread is not COVID related*
— Uri Shalit (@ShalitUri) February 23, 2021
In a new preprint with @wald_yoav, @amir_feder and @d_greenfeld, we have some interesting findings about out-of-domain (OOD) generalization and its relation to the idea of model calibration https://t.co/ziIkPzCBXf
1/12
18. Style and Pose Control for Image Synthesis of Humans from a Single Monocular View
Kripasindhu Sarkar, Vladislav Golyanik, Lingjie Liu, Christian Theobalt
Photo-realistic re-rendering of a human from a single image with explicit control over body pose, shape and appearance enables a wide range of applications, such as human appearance transfer, virtual try-on, motion imitation, and novel view synthesis. While significant progress has been made in this direction using learning-based image generation tools, such as GANs, existing approaches yield noticeable artefacts such as blurring of fine details, unrealistic distortions of the body parts and garments, as well as severe changes of the textures. We therefore propose a new method for synthesising photo-realistic human images with explicit control over pose and part-based appearance, i.e., StylePoseGAN, where we extend a non-controllable generator to accept conditioning on pose and appearance separately. Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts, and it significantly outperforms existing single-image re-rendering methods. Our disentangled representation opens up further applications such as garment transfer, motion transfer, virtual try-on, head (identity) swap and appearance interpolation. StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics compared to the current best-performing methods, and its advantages are confirmed in a comprehensive user study.
Style and Pose Control for Image Synthesis of Humans from a Single Monocular View
— AK (@ak92501) February 23, 2021
pdf: https://t.co/xhG5t5kBgs
abs: https://t.co/I3xWxDQn3f pic.twitter.com/Df62NN0oq7
19. Social Diffusion Sources Can Escape Detection
Marcin Waniek, Manuel Cebrian, Petter Holme, Talal Rahwan
Influencing (and being influenced by) others indirectly through social networks is fundamental to all human societies. Whether this happens through the diffusion of rumors, viruses, opinions, or know-how, finding the source is of persistent interest to people and an algorithmic challenge of much current research interest. However, no study has considered the case of diffusion sources actively trying to avoid detection. By disregarding this possibility, we risk conflating intentional obfuscation with the fundamental limitations of source-finding algorithms. We close this gap by separating two mechanisms that hide diffusion sources: one stemming from the network topology itself and the other from strategic manipulation of the network. We find that identifying the source can be challenging even without foul play and that, in many cases, it is easy to evade source-detection algorithms even further. We show that hiding connections that were part of the viral cascade is far more effective than introducing fake individuals. Thus, efforts should focus on exposing concealed ties rather than planted fake entities, e.g., bots in social media; such exposure would drastically improve our chances of detecting the source of a social diffusion.
arXiv time! 📰
— Petter Holme (@pholme) February 23, 2021
Come for the visuals, stay for the proofs... as we hammer the first and last nail in the coffin of non-adversary-aware source detection. https://t.co/N6UhCkwvsT
One of the most ambitious projects I've had the pleasure of taking part in. @mjwaniek @talalrahwan pic.twitter.com/snw0SVVwd9
20. Hamiltonian-Driven Shadow Tomography of Quantum States
Hong-Ye Hu, Yi-Zhuang You
Classical shadow tomography provides an efficient method for predicting functions of an unknown quantum state from a few measurements of the state. It relies on a unitary channel that efficiently scrambles the quantum information of the state to the measurement basis. Facing the challenge of realizing deep unitary circuits on near-term quantum devices, we explore the scenario in which the unitary channel can be shallow and is generated by a quantum chaotic Hamiltonian via time evolution. We provide an unbiased estimator of the density matrix for all ranges of the evolution time. We analyze the sample complexity of the Hamiltonian-driven shadow tomography. We find that it can be more efficient than unitary-2-design-based shadow tomography in a sequence of intermediate time windows that range from an order-1 scrambling time to a time scale set by the Hilbert space dimension D. In particular, the efficiency of predicting diagonal observables is improved by a factor that grows with D, without sacrificing the efficiency of predicting off-diagonal observables.
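For background, here is the standard random-Pauli classical shadow estimator that the Hamiltonian-driven variant generalises (single qubit, our own toy code; the paper replaces the random unitary with chaotic Hamiltonian evolution):

```python
import numpy as np

# Random-Pauli classical shadows: rotate into a random Pauli basis, measure,
# and form the single-shot estimator 3 * U^dag |b><b| U - I, which is an
# unbiased estimator of the density matrix when averaged.
rng = np.random.default_rng(1)
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)           # rotates X basis to Z
S = np.array([[1, 0], [0, 1j]])
bases = [H, H @ S.conj().T, I2]                        # measure X, Y, or Z

rho = np.array([[0.9, 0.3], [0.3, 0.1]])               # unknown state (example)

shadows = []
for _ in range(20000):
    U = bases[rng.integers(3)]
    p0 = np.real((U @ rho @ U.conj().T)[0, 0])         # Born rule for outcome 0
    b = 0 if rng.random() < p0 else 1
    ket = U.conj().T[:, b:b + 1]                       # U^dag |b>
    shadows.append(3 * ket @ ket.conj().T - I2)        # single-shot estimator

print(np.round(np.mean(shadows, axis=0), 2))           # ≈ rho (tiny imag. noise)
```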
21. Gaussian Process Nowcasting: Application to COVID-19 Mortality Reporting
Iwona Hawryluk, Henrique Hoeltgebaum, Swapnil Mishra, Xenia Miscouridou, Ricardo P Schnekenberg, Charles Whittaker, Michaela Vollmer, Seth Flaxman, Samir Bhatt, Thomas A Mellan
Updating observations of a signal due to delays in the measurement process is a common problem in signal processing, with prominent examples in a wide range of fields. An important instance of this problem is the nowcasting of COVID-19 mortality: given a stream of reported counts of daily deaths, can we correct for the delays in reporting to paint an accurate picture of the present, with uncertainty? Without this correction, raw data will often mislead by suggesting an improving situation. We present a flexible approach using a latent Gaussian process that is capable of describing the changing auto-correlation structure present in the reporting time-delay surface. This approach also yields robust estimates of uncertainty for the nowcasted numbers of deaths. We test assumptions in model specification, such as the choice of kernel and hyperpriors, and evaluate model performance on a challenging real dataset from Brazil. Our experiments show that Gaussian process nowcasting performs favourably against both comparable methods and a small sample of expert human predictions. Our approach has substantial practical utility in disease modelling: by applying it to COVID-19 mortality data from Brazil, where reporting delays are large, we can make informative predictions of important epidemiological quantities such as the current effective reproduction number.
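A toy Gaussian-process regression sketch of the kind underlying a nowcast (squared-exponential kernel, synthetic counts; the paper's actual model additionally represents the full reporting-delay surface): fit the reliable part of a death-count series and predict the most recent, still-incomplete days with uncertainty.

```python
import numpy as np

def rbf(a, b, scale=5.0, var=100.0):
    """Squared-exponential kernel between two sets of time points."""
    return var * np.exp(-0.5 * ((a[:, None] - b[None, :]) / scale) ** 2)

days = np.arange(40.0)
true_rate = 30 + 10 * np.sin(days / 6)
rng = np.random.default_rng(0)
counts = rng.poisson(true_rate).astype(float)

train = days[:-5]                      # last 5 days are not fully reported yet
K = rbf(train, train) + 25.0 * np.eye(train.size)      # observation noise
K_star = rbf(days[-5:], train)
alpha = np.linalg.solve(K, counts[:-5] - counts[:-5].mean())

# standard GP posterior mean and covariance at the recent days
mean = counts[:-5].mean() + K_star @ alpha
cov = rbf(days[-5:], days[-5:]) - K_star @ np.linalg.solve(K, K_star.T)
sd = np.sqrt(np.clip(np.diag(cov), 0, None))
for d, m, s in zip(days[-5:], mean, sd):
    print(f"day {d:.0f}: nowcast {m:.1f} ± {2 * s:.1f}")
```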