1. Energy-based Out-of-distribution Detection
Weitang Liu, Xiaoyun Wang, John D. Owens, Yixuan Li
Determining whether inputs are out-of-distribution (OOD) is an essential building block for safely deploying machine learning models in the open world. However, previous methods relying on the softmax confidence score suffer from overconfident posterior distributions for OOD data. We propose a unified framework for OOD detection that uses an energy score. We show that energy scores better distinguish in- and out-of-distribution samples than the traditional approach using the softmax scores. Unlike softmax confidence scores, energy scores are theoretically aligned with the probability density of the inputs and are less susceptible to the overconfidence issue. Within this framework, energy can be flexibly used as a scoring function for any pre-trained neural classifier as well as a trainable cost function to shape the energy surface explicitly for OOD detection. On a CIFAR-10 pre-trained WideResNet, using the energy score reduces the average FPR (at TPR 95%) by 18.03% compared to the softmax confidence score. With energy-based training, our method outperforms the state-of-the-art on common benchmarks.
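For readers who want to try this on their own classifier: the energy score is a temperature-scaled negative log-sum-exp over the logits, following the paper's definition E(x; f) = -T · log Σ_k exp(f_k(x)/T). A minimal PyTorch sketch; the threshold `tau` is a deployment choice (e.g., set so 95% of in-distribution data is retained) and is an assumption here, not a value from the paper:

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    """E(x; f) = -T * logsumexp(f(x) / T); lower energy suggests in-distribution."""
    return -T * torch.logsumexp(logits / T, dim=-1)

def is_ood(logits: torch.Tensor, tau: float, T: float = 1.0) -> torch.Tensor:
    # Flag inputs whose energy exceeds a calibration threshold tau (assumed,
    # typically chosen on held-out in-distribution data).
    return energy_score(logits, T) > tau
```

Because the score is computed from the logits alone, it drops into any pre-trained classifier without retraining, which is the "flexible scoring function" use case the abstract describes.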
Suffering from overconfident softmax scores? Time to use energy scores!
— Sharon Y. Li (@SharonYixuanLi) October 9, 2020
Excited to release our NeurIPS paper on "Energy-based Out-of-distribution Detection", a theoretically motivated framework for OOD detection. 1/n
Paper: https://t.co/0DOLbUR8D5 (w/ code included) pic.twitter.com/OSwiJlcfPA
2. Large Product Key Memory for Pretrained Language Models
Gyuwan Kim, Tae-Hwan Jung
Product key memory (PKM), proposed by Lample et al. (2019), makes it possible to improve prediction accuracy by efficiently increasing model capacity with insignificant computational overhead. However, its empirical application has been limited to causal language modeling. Motivated by the recent success of pretrained language models (PLMs), we investigate how to incorporate large PKM into PLMs that can be finetuned for a wide variety of downstream NLP tasks. We define a new memory usage metric, and careful observation using this metric reveals that most memory slots remain outdated during the training of PKM-augmented models. To train better PLMs by tackling this issue, we propose simple but effective solutions: (1) initialization from the model weights pretrained without memory and (2) adding PKM alongside a feed-forward network rather than replacing it. We verify that both of them are crucial for the pretraining of PKM-augmented PLMs, enhancing memory utilization and downstream performance. Code and pretrained weights are available at https://github.com/clovaai/pkm-transformers.
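As background, the PKM lookup from Lample et al. (2019) factorizes a memory of N = n² slots into two sub-key tables of size n, so a top-k search costs roughly O(√N). A minimal single-query sketch; tensor shapes and the softmax readout are illustrative assumptions:

```python
import torch

def pkm_lookup(q, K1, K2, values, topk=4):
    # q: (d,) query; K1, K2: (n, d/2) sub-key tables; values: (n*n, dv) slots.
    d = q.shape[0]
    q1, q2 = q[: d // 2], q[d // 2:]
    s1, i1 = (K1 @ q1).topk(topk)              # best half-keys per half-query
    s2, i2 = (K2 @ q2).topk(topk)
    # Candidate product keys score as the sum of their two half-scores.
    cand = (s1[:, None] + s2[None, :]).flatten()
    idx = (i1[:, None] * K2.shape[0] + i2[None, :]).flatten()
    best, j = cand.topk(topk)                  # overall top-k among candidates
    w = torch.softmax(best, dim=0)
    return w @ values[idx[j]]                  # weighted sum of selected slots

n, half = 64, 8
q = torch.randn(2 * half)
K1, K2 = torch.randn(n, half), torch.randn(n, half)
values = torch.randn(n * n, 32)
out = pkm_lookup(q, K1, K2, values)            # (32,) memory readout
```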
Last year, we showed that you can outperform a 24-layer transformer in language modeling with just 12 layers and 1 Product-key memory layer. https://t.co/wjZvgBdgbh show that these results also transfer to downstream tasks: BERT large performance with a PKM-augmented BERT base! https://t.co/ORYsJSdJVL
— Guillaume Lample (@GuillaumeLample) October 9, 2020
3. What Can We Do to Improve Peer Review in NLP?
Anna Rogers, Isabelle Augenstein
Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
New paper📜: What Can We Do to Improve Peer Review in NLP? https://t.co/GW8pzbIyLv
— Anna Rogers (@annargrs) October 9, 2020
with @IAugenstein
TLDR: In its current form, peer review is a poorly defined task with apples-to-oranges comparisons and unrealistic expectations. /1 pic.twitter.com/pETzijb3OX
4. Online Safety Assurance for Deep Reinforcement Learning
Noga H. Rotman, Michael Schapira, Aviv Tamar
Recently, deep learning has been successfully applied to a variety of networking problems. A fundamental challenge is that when the operational environment for a learning-augmented system differs from its training environment, such systems often make badly informed decisions, leading to bad performance. We argue that safely deploying learning-driven systems requires being able to determine, in real time, whether system behavior is coherent, for the purpose of defaulting to a reasonable heuristic when this is not so. We term this the online safety assurance problem (OSAP). We present three approaches to quantifying decision uncertainty that differ in terms of the signal used to infer uncertainty. We illustrate the usefulness of online safety assurance in the context of the proposed deep reinforcement learning (RL) approach to video streaming. While deep RL for video streaming bests other approaches when the operational and training environments match, it is dominated by simple heuristics when the two differ. Our preliminary findings suggest that transitioning to a default policy when decision uncertainty is detected is key to enjoying the performance benefits afforded by leveraging ML without compromising on safety.
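The paper compares three uncertainty signals. As a hedged illustration of the general recipe only, here is one plausible gate using the entropy of the policy's action distribution; the threshold and the heuristic are stand-ins, not the paper's proposed method:

```python
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy of an action distribution (natural log)."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def safe_act(policy_probs: np.ndarray, heuristic_action: int, threshold: float) -> int:
    # Default to a reasonable heuristic when decision uncertainty is high;
    # otherwise trust the learned policy.
    if entropy(policy_probs) > threshold:
        return heuristic_action
    return int(np.argmax(policy_probs))
```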
Online Safety Assurance for Deep Reinforcement Learning. #MachineLearning #DataScience #ArtificialIntelligence #BigData #Analytics #RStats #Python #Java #JavaScript #ReactJS #Serverless #IoT #Linux #Coding #Programming #DataScientist #AI #DeepLearning https://t.co/G8x7OFb3UQ pic.twitter.com/enz16SJACS
— Marcus Borba (@marcusborba) October 9, 2020
5. Olympus: a benchmarking framework for noisy optimization and experiment planning
Florian Häse, Matteo Aldeghi, Riley J. Hickman, Loïc M. Roch, Melodie Christensen, Elena Liles, Jason E. Hein, Alán Aspuru-Guzik
Research challenges encountered across science, engineering, and economics can frequently be formulated as optimization tasks. In chemistry and materials science, recent growth in laboratory digitization and automation has sparked interest in optimization-guided autonomous discovery and closed-loop experimentation. Experiment planning strategies based on off-the-shelf optimization algorithms can be employed in fully autonomous research platforms to achieve desired experimentation goals with the minimum number of trials. However, the experiment planning strategy that is most suitable to a scientific discovery task is a priori unknown, while rigorous comparisons of different strategies are highly time- and resource-demanding. As optimization algorithms are typically benchmarked on low-dimensional synthetic functions, it is unclear how their performance would translate to noisy, higher-dimensional experimental tasks encountered in chemistry and materials science. We introduce Olympus, a software package that provides a consistent and easy-to-use framework for benchmarking optimization algorithms against realistic experiments emulated via probabilistic deep-learning models. Olympus includes a collection of experimentally derived benchmark sets from chemistry and materials science and a suite of experiment planning strategies that can be easily accessed via a user-friendly Python interface. Furthermore, Olympus facilitates the integration, testing, and sharing of custom algorithms and user-defined datasets. In brief, Olympus mitigates the barriers associated with benchmarking optimization algorithms on realistic experimental scenarios, promoting data sharing and the creation of a standard framework for evaluating the performance of experiment planning strategies.
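To make the planner-vs-emulator benchmarking loop concrete, here is a hypothetical sketch; the class and function names are illustrative stand-ins (a random planner, a noisy quadratic "emulator"), not the actual Olympus API:

```python
import random

class RandomPlanner:
    """Stand-in planner: suggests uniformly random experiment parameters."""
    def suggest(self, history):
        return [random.uniform(-1, 1) for _ in range(3)]

def noisy_emulator(params):
    # Stand-in for a probabilistic deep-learning emulator of a real experiment.
    return sum(p ** 2 for p in params) + random.gauss(0, 0.05)

def benchmark(planner, emulator, budget=50):
    history = []
    for _ in range(budget):
        params = planner.suggest(history)       # next experiment to run
        history.append((params, emulator(params)))
    return min(history, key=lambda h: h[1])     # best (lowest) measurement

best_params, best_value = benchmark(RandomPlanner(), noisy_emulator)
```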
We are ready to share the preprint of #Olympus a #machinelearning software platform for benchmarking optimization algorithms in noisy surfaces. #matterlab @VectorInst @chemuoft @uoft @ubc New tool for #selfdrivinglabs https://t.co/FvzHhTkUOZ
— Alan Aspuru-Guzik (@A_Aspuru_Guzik) October 9, 2020
6. DiffTune: Optimizing CPU Simulator Parameters with Learned Differentiable Surrogates
Alex Renda, Yishen Chen, Charith Mendis, Michael Carbin
CPU simulators are useful tools for modeling CPU execution behavior. However, they suffer from inaccuracies due to the cost and complexity of setting their fine-grained parameters, such as the latencies of individual instructions. This complexity arises from the expertise required to design benchmarks and measurement frameworks that can precisely measure the values of parameters at such fine granularity. In some cases, these parameters do not necessarily have a physical realization and are therefore fundamentally approximate, or even unmeasurable. In this paper we present DiffTune, a system for learning the parameters of x86 basic block CPU simulators from coarse-grained end-to-end measurements. Given a simulator, DiffTune learns its parameters by first replacing the original simulator with a differentiable surrogate, another function that approximates the original function; by making the surrogate differentiable, DiffTune is then able to apply gradient-based optimization techniques even when the original function is non-differentiable, such as is the case with CPU simulators. With this differentiable surrogate, DiffTune then applies gradient-based optimization to produce values of the simulator’s parameters that minimize the simulator’s error on a dataset of ground truth end-to-end performance measurements. Finally, the learned parameters are plugged back into the original simulator. DiffTune is able to automatically learn the entire set of microarchitecture-specific parameters within the Intel x86 simulation model of llvm-mca, a basic block CPU simulator based on LLVM’s instruction scheduling model. DiffTune’s learned parameters lead llvm-mca to an average error that not only matches but improves on that of its original, expert-provided parameter values.
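The surrogate trick can be illustrated end to end on a toy problem. In the sketch below, `simulator` is a stand-in non-differentiable function (the real system tunes llvm-mca); the two-stage structure mirrors the description above: fit a differentiable surrogate, then optimize the parameters through it:

```python
import torch
import torch.nn as nn

P = 16                                          # number of parameters (toy size)
torch.manual_seed(0)

def simulator(theta):
    # Stand-in non-differentiable simulator: round() kills useful gradients,
    # which is exactly why a surrogate is needed.
    return (theta.round() ** 2).sum(-1, keepdim=True)

# Stage 1: fit a differentiable surrogate on (parameters -> measurement) pairs.
surrogate = nn.Sequential(nn.Linear(P, 64), nn.ReLU(), nn.Linear(64, 1))
opt_s = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(2000):
    theta = torch.randn(256, P)
    loss = nn.functional.mse_loss(surrogate(theta), simulator(theta))
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()

# Stage 2: optimize the parameters through the frozen surrogate.
theta = torch.randn(1, P, requires_grad=True)
opt_t = torch.optim.Adam([theta], lr=1e-2)
target = torch.zeros(1, 1)                      # ground-truth measurement
for _ in range(500):
    loss = nn.functional.mse_loss(surrogate(theta), target)
    opt_t.zero_grad()
    loss.backward()
    opt_t.step()

# Stage 3: plug theta.detach() back into the original simulator.
```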
This paper presents DiffTune, a system for learning the parameters of x86 basic block CPU simulators from coarse-grained end-to-end measurements, showing it is able to learn the entire set of 11,265 μarch-specific parameters from scratch in LLVM-MCA. https://t.co/jIMSXlMDRd pic.twitter.com/nYnl4JQsOp
— Underfox (@Underfox3) October 10, 2020
7. Maximum Reward Formulation In Reinforcement Learning
Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar
Reinforcement learning (RL) algorithms typically deal with maximizing the expected cumulative return (discounted or undiscounted, finite or infinite horizon). However, several crucial applications in the real world, such as drug discovery, do not fit within this framework because an RL agent only needs to identify states (molecules) that achieve the highest reward within a trajectory and does not need to optimize for the expected cumulative return. In this work, we formulate an objective function to maximize the expected maximum reward along a trajectory, derive a novel functional form of the Bellman equation, introduce the corresponding Bellman operators, and provide a proof of convergence. Using this formulation, we achieve state-of-the-art results on the task of molecule generation that mimics a real-world drug discovery pipeline.
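The abstract does not spell out the operator, but one plausible tabular reading replaces the usual sum backup with max(immediate reward, future value). A value-iteration sketch under that assumption; the exact functional form in the paper may differ:

```python
import numpy as np

def max_reward_value_iteration(P, R, iters=100):
    """P[s, a, s']: transition probabilities; R[s, a, s']: immediate rewards.

    Backup (assumed form): Q(s,a) = E_{s'}[ max(r(s,a,s'), max_{a'} Q(s',a')) ],
    i.e. propagate the best reward seen along a trajectory rather than a sum.
    """
    S, A, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)                                  # greedy state value
        Q = (P * np.maximum(R, V[None, None, :])).sum(axis=2)
    return Q
```

Under this objective, an agent that passes through a single high-reward molecule anywhere along a trajectory is credited for it, which matches the drug-discovery motivation above.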
Our new work on maximum reward formulation in #RL is out: https://t.co/xZdjiRR308 We formulate the objective function to maximize the expected maximum reward in a trajectory (instead of the traditional expected cumulative return), derive a new functional form of the Bellman 1/n
— Sai Krishna G.V. (@saikrishna_gvs) October 9, 2020
8. Flipping the Perspective in Contact Tracing
Po-Shen Loh
Contact tracing has been a widely-discussed technique for controlling COVID-19. The traditional test-trace-isolate-support paradigm focuses on identifying people after they have been exposed to positive individuals, and isolating them to protect others. This article introduces an alternative and complementary approach, which appears to be the first to notify people before exposure happens, in the context of their interaction network, so that they can directly take actions to avoid exposure themselves, without using personally identifiable information. Our system has just become achievable with present technology: for each positive case, do not only notify their direct contacts, but inform thousands of people of how far away they are from the positive case, as measured in network-theoretic distance in their physical relationship network. This fundamentally different approach has already been deployed in a publicly downloadable app. It brings a new tool to bear on the pandemic, powered by network theory. Like a weather satellite providing early warning of incoming hurricanes, it empowers individuals to see transmission approaching from far away, and to directly avoid exposure in the first place. This flipped perspective engages natural self-interested instincts of self-preservation, reducing reliance on altruism. Consequently, our new system could solve the behavior coordination problem which has hampered many other app-based interventions to date. We also provide a heuristic mathematical analysis that shows how our system already achieves critical mass from the user perspective at very low adoption thresholds (likely below 10% in some common types of communities as indicated empirically in the first practical deployment); after that point, the design of our system naturally accelerates further adoption, while also alerting even non-users of the app.
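The core computation is a multi-source breadth-first search over the (anonymized) relationship network, which gives every user their hop distance to the nearest positive case. A minimal sketch; the adjacency format and hop cap are assumptions for illustration:

```python
from collections import deque

def network_distances(adj, positive_cases, max_hops=6):
    """Multi-source BFS: hops from each person to the nearest positive case.

    adj: dict mapping person -> iterable of contacts.
    Returns {person: distance}; distance 0 marks the positive cases themselves.
    """
    dist = {p: 0 for p in positive_cases}
    queue = deque(positive_cases)
    while queue:
        u = queue.popleft()
        if dist[u] == max_hops:                 # stop expanding past the cap
            continue
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist  # notify everyone reached, not just direct contacts (distance 1)
```

This is what lets the app warn users "the storm is N relationships away" without revealing who the positive case is.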
Wrote analysis for new & fundamentally different (much more powerful) approach to digital #ContactTracing. #COVID radar: for each case, don't only tell direct contacts, but anonymously tell everyone how many relationships away they are from it! https://t.co/8ECHpffRW1#mathchat
— Po-Shen Loh (@PoShenLoh) October 9, 2020
9. Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines
Keerthiram Murugesan, Mattia Atzeni, Pavan Kapanipathi, Pushkar Shukla, Sadhana Kumaravel, Gerald Tesauro, Kartik Talamadupula, Mrinmaya Sachan, Murray Campbell
Text-based games have emerged as an important test-bed for Reinforcement Learning (RL) research, requiring RL agents to combine grounded language understanding with sequential decision making. In this paper, we examine the problem of infusing RL agents with commonsense knowledge. Such knowledge would allow agents to efficiently act in the world by pruning out implausible actions, and to perform look-ahead planning to determine how current actions might affect future world states. We design a new text-based gaming environment called TextWorld Commonsense (TWC) for training and evaluating RL agents with a specific kind of commonsense knowledge about objects, their attributes, and affordances. We also introduce several baseline RL agents which track the sequential context and dynamically retrieve the relevant commonsense knowledge from ConceptNet. We show that agents which incorporate commonsense knowledge in TWC perform better, while acting more efficiently. We conduct user-studies to estimate human performance on TWC and show that there is ample room for future improvement.
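As a toy illustration of the retrieval step only (the actual agents score ConceptNet triples dynamically against the sequential context; the string-matching rule here is a simplified stand-in):

```python
def retrieve_commonsense(observation, triples):
    """Keep ConceptNet-style (head, relation, tail) triples whose head entity
    is mentioned in the current observation. Purely illustrative."""
    tokens = set(observation.lower().split())
    return [(h, r, t) for (h, r, t) in triples if h.lower() in tokens]

facts = retrieve_commonsense(
    "You see an apple on the table",
    [("apple", "AtLocation", "fridge"), ("book", "AtLocation", "shelf")],
)
# -> [("apple", "AtLocation", "fridge")]: the agent can prune implausible
#    actions (e.g. shelving the apple) and plan toward opening the fridge.
```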
Good evening. Did you know that the principal research scientist behind the Deep Blue chess playing agent is now working on Text-Adventure game playing agents? https://t.co/2dzmHLJ95m
— Mark O. Riedl (@mark_riedl) October 9, 2020
Welcome @murraycampbell and watch out for Grues! 😄 pic.twitter.com/Z4d6p7iFNz
10. MolDesigner: Interactive Design of Efficacious Drugs with Deep Learning
Kexin Huang, Tianfan Fu, Dawood Khan, Ali Abid, Ali Abdalla, Abubakar Abid, Lucas M. Glass, Marinka Zitnik, Cao Xiao, Jimeng Sun
The efficacy of a drug depends on its binding affinity to the therapeutic target and pharmacokinetics. Deep learning (DL) has demonstrated remarkable progress in predicting drug efficacy. We develop MolDesigner, a human-in-the-loop web user interface (UI), to assist drug developers in leveraging DL predictions to design more effective drugs. A developer can draw a drug molecule in the interface. In the backend, more than 17 state-of-the-art DL models generate predictions on important indices that are crucial for a drug’s efficacy. Based on these predictions, drug developers can edit the drug molecule and iterate until satisfied. MolDesigner makes predictions in real time with a latency of less than a second.
MolDesigner is in #NeurIPS2020 Demo!
— Kexin Huang (@KexinHuang5) October 10, 2020
-Interactive molecule design with DL, powered by DeepPurpose and @GradioML!
-Predict binding affinity and 17 ADMET properties from 50+ DL models!
-Less than 1 sec latency!
Video: https://t.co/qx3p3hkAkh
Paper: https://t.co/weUTZKYCT5 pic.twitter.com/3GTEUUvuRO
11. Fast Stencil-Code Computation on a Wafer-Scale Processor
Kamil Rocki, Dirk Van Essendelft, Ilya Sharapov, Robert Schreiber, Michael Morrison, Vladimir Kibardin, Andrey Portnoy, Jean Francois Dietiker, Madhava Syamlal, Michael James
The performance of CPU-based and GPU-based systems is often low for PDE codes, where large, sparse, and often structured systems of linear equations must be solved. Iterative solvers are limited by data movement, both between caches and memory and between nodes. Here we describe the solution of such systems of equations on the Cerebras Systems CS-1, a wafer-scale processor that has the memory bandwidth and communication latency to perform well. We achieve 0.86 PFLOPS on a single wafer-scale system for the solution by BiCGStab of a linear system arising from a 7-point finite difference stencil on a 600 × 595 × 1536 mesh, achieving about one third of the machine’s peak performance. We explain the system, its architecture and programming, and its performance on this problem and related problems. We discuss issues of memory capacity and floating point precision. We outline plans to extend this work towards full applications.
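The same mathematical problem, a 7-point stencil solved with BiCGStab, can be reproduced at toy scale with SciPy; this is useful for sanity-checking the setup, not a performance analogue of the CS-1:

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import bicgstab

def laplacian_1d(n):
    # Tridiagonal 1-D Laplacian with Dirichlet boundaries.
    return diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))

# 3-D 7-point Laplacian on an nx*ny*nz grid via Kronecker sums
# (a toy analogue of the 600 x 595 x 1536 mesh in the paper).
nx = ny = nz = 16
Ix, Iy, Iz = identity(nx), identity(ny), identity(nz)
A = (kron(kron(laplacian_1d(nx), Iy), Iz)
     + kron(kron(Ix, laplacian_1d(ny)), Iz)
     + kron(kron(Ix, Iy), laplacian_1d(nz))).tocsr()

b = np.ones(nx * ny * nz)
x, info = bicgstab(A, b)   # info == 0 indicates convergence
```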
12. Detecting Fine-Grained Cross-Lingual Semantic Divergences without Supervision by Learning to Rank
Eleftheria Briakou, Marine Carpuat
Detecting fine-grained differences in content conveyed in different languages matters for cross-lingual NLP and multilingual corpora analysis, but it is a challenging machine learning problem since annotation is expensive and hard to scale. This work improves the prediction and annotation of fine-grained semantic divergences. We introduce a training strategy for multilingual BERT models by learning to rank synthetic divergent examples of varying granularity. We evaluate our models on the Rationalized English-French Semantic Divergences, a new dataset released with this work, consisting of English-French sentence-pairs annotated with semantic divergence classes and token-level rationales. Learning to rank helps detect fine-grained sentence-level divergences more accurately than a strong sentence-level similarity model, while token-level predictions have the potential of further distinguishing between coarse and fine-grained divergences.
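A minimal sketch of a learning-to-rank objective of this kind, using a margin ranking loss over synthetic pairs of differing divergence granularity; the scorer, batch, and ranking direction are assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn

# Margin ranking over synthetic pairs: a sentence pair with a milder (finer)
# divergence should receive a higher similarity score than a more strongly
# divergent one. In practice the scores come from a multilingual BERT scorer.
rank_loss = nn.MarginRankingLoss(margin=1.0)

score_mild   = torch.randn(8, requires_grad=True)  # e.g. subtle meaning shifts
score_strong = torch.randn(8, requires_grad=True)  # e.g. unrelated sentences
target = torch.ones(8)                             # first argument ranks higher
loss = rank_loss(score_mild, score_strong, target)
loss.backward()
```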
New #emnlp2020 paper w/ @MarineCarpuat at @umdclip on "Detecting Fine-grained Cross-lingual Semantic Divergences without supervision by Learning to Rank" is now on arxiv: https://t.co/bVPjw9UZDq
— Eleftheria Briakou (@ebriakou) October 9, 2020
Code and data available: https://t.co/PpPxLnqjc5
13. Robust Semi-Supervised Learning with Out of Distribution Data
Xujiang Zhao, Killamsetty Krishnateja, Rishabh Iyer, Feng Chen
Semi-supervised learning (SSL) based on deep neural networks (DNNs) has recently been proven effective. However, recent work [Oliver et al., 2018] shows that the performance of SSL could degrade substantially when the unlabeled set has out-of-distribution examples (OODs). In this work, we first study the key causes of the negative impact of OODs on SSL. We find that (1) OODs close to the decision boundary have a larger effect on the performance of existing SSL algorithms than OODs far away from the decision boundary and (2) Batch Normalization (BN), a popular module in deep networks, could substantially degrade the performance of a DNN for SSL when the unlabeled set contains OODs. To address these causes, we propose a novel unified robust SSL approach for many existing SSL algorithms in order to improve their robustness against OODs. In particular, we propose a simple modification to batch normalization, called weighted batch normalization, capable of improving the robustness of BN against OODs. We develop two efficient hyperparameter optimization algorithms that have different tradeoffs in computational efficiency and accuracy: the first is a meta-approximation and the second is an implicit-differentiation based approximation. Both algorithms learn to reweight the unlabeled samples in order to improve the robustness of SSL against OODs. Extensive experiments on both synthetic and real-world datasets demonstrate that our proposed approach significantly improves the robustness of four representative SSL algorithms against OODs, in comparison with four state-of-the-art robust SSL approaches. We perform an ablation study to demonstrate which components of our approach are most important for its success.
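The abstract does not give the exact form of weighted batch normalization; one natural reading, sketched below, computes batch statistics with per-sample weights so that suspected OODs contribute less (learnable scale/shift omitted for brevity):

```python
import torch

def weighted_batch_norm(x: torch.Tensor, w: torch.Tensor, eps: float = 1e-5):
    """Assumed form of weighted BN. x: (N, C) activations; w: (N,) nonnegative
    per-sample weights (e.g. low for samples suspected to be OOD), so those
    samples barely influence the batch mean and variance."""
    w = w / w.sum()
    mean = (w[:, None] * x).sum(0)
    var = (w[:, None] * (x - mean) ** 2).sum(0)
    return (x - mean) / torch.sqrt(var + eps)
```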