1. Refinement Types: A Tutorial
Ranjit Jhala, Niki Vazou
Refinement types enrich a language’s type system with logical predicates that circumscribe the set of values described by the type, thereby providing software developers a tunable knob with which to inform the type system about what invariants and correctness properties should be checked on their code. In this article, we distill the ideas developed in the substantial literature on refinement types into a unified tutorial that explains the key ingredients of modern refinement type systems. In particular, we show how to implement a refinement type checker via a progression of languages that incrementally add features to the language or type system.
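A refinement type check ultimately boils down to a logical implication between predicates that an SMT solver can discharge. As a minimal illustrative sketch (not the tutorial's own code), here is how the subtyping judgment {v:Int | v > 0} <: {v:Int | v >= 0} might be checked with the z3 Python bindings:

```python
# A minimal sketch of how refinement subtyping reduces to an SMT query,
# using the z3 Python bindings. Illustrative only, not the tutorial's checker.
from z3 import Int, Implies, Not, Solver, unsat

def is_subtype(v, lhs_pred, rhs_pred):
    """{v | lhs_pred} <: {v | rhs_pred} holds iff lhs_pred => rhs_pred is valid,
    i.e. the negation of the implication is unsatisfiable."""
    s = Solver()
    s.add(Not(Implies(lhs_pred, rhs_pred)))
    return s.check() == unsat

v = Int("v")
print(is_subtype(v, v > 0, v >= 0))   # True:  {v > 0} <: {v >= 0}
print(is_subtype(v, v >= 0, v > 0))   # False: v = 0 is a counterexample
```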
I’m excited (and TBH, exhausted) to report that @nikivazou and I wrote a “nanopass style” tutorial on how to implement refinement type checkers
— Ranjit "enough!" Jhala (@RanjitJhala) October 16, 2020
Preprint: https://t.co/5D7mmgmE0P
Code: https://t.co/I1Y7nezHxR
Comments etc most welcome!
2. NeRF++: Analyzing and Improving Neural Radiance Fields
Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings, including 360 capture of bounded scenes and forward-facing capture of bounded and unbounded scenes. NeRF fits multi-layer perceptrons (MLPs) representing view-invariant opacity and view-dependent color volumes to a set of training images, and samples novel views based on volume rendering techniques. In this technical report, we first remark on radiance fields and their potential ambiguities, namely the shape-radiance ambiguity, and analyze NeRF’s success in avoiding such ambiguities. Second, we address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, unbounded 3D scenes. Our method improves view synthesis fidelity in this challenging scenario. Code is available at https://github.com/Kai-46/nerfplusplus.
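The parametrization fix for unbounded scenes is, roughly, to represent background points outside the unit sphere by their direction plus inverse distance, so that arbitrarily distant geometry maps into a bounded coordinate range. A small numpy sketch of that inverted-sphere idea (my reading of the abstract, not the authors' code):

```python
import numpy as np

def invert_outside_unit_sphere(points):
    """Map points outside the unit sphere (r > 1) to bounded coordinates
    (x/r, y/r, z/r, 1/r); as r -> infinity, the last coordinate 1/r -> 0.
    A sketch of the inverted-sphere parametrization described in the abstract."""
    r = np.linalg.norm(points, axis=-1, keepdims=True)        # (N, 1)
    return np.concatenate([points / r, 1.0 / r], axis=-1)      # (N, 4)

far_points = np.array([[0.0, 0.0, 2.0], [100.0, 0.0, 0.0]])
print(invert_outside_unit_sphere(far_points))
```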
NeRF++: Analyzing and Improving Neural Radiance Fields
— AK (@ak92501) October 16, 2020
pdf: https://t.co/AQO0q1WvCi
abs: https://t.co/iN5w7O6CFW
github: https://t.co/4nUd3hGJ6n pic.twitter.com/sFe91zVoH6
3. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
Trust is a central component of the interaction between people and AI, in that ‘incorrect’ levels of trust may cause misuse, abuse or disuse of the technology. But what, precisely, is the nature of trust in AI? What are the prerequisites and goals of the cognitive mechanism of trust, and how can we cause these prerequisites and goals, or assess whether they are being satisfied in a given interaction? This work aims to answer these questions. We discuss a model of trust inspired by, but not identical to, sociology’s interpersonal trust (i.e., trust between people). This model rests on two key properties: the vulnerability of the user and the ability to anticipate the impact of the AI model’s decisions. We incorporate a formalization of ‘contractual trust’, such that trust between a user and an AI is trust that some implicit or explicit contract will hold, and a formalization of ‘trustworthiness’ (which detaches from the notion of trustworthiness in sociology), and with it concepts of ‘warranted’ and ‘unwarranted’ trust. We then present the possible causes of warranted trust as intrinsic reasoning and extrinsic behavior, and discuss how to design trustworthy AI, how to evaluate whether trust has manifested, and whether it is warranted. Finally, we elucidate the connection between trust and XAI using our formalization.
"We want to increase the user's trust in the model," or "we want a more trustworthy model" - you probably saw this sentiment in many papers. But what exactly does this mean?
— Alon Jacovi (@alon_jacovi) October 16, 2020
New paper! --> https://t.co/DBos4T6AL5 @trustworthy_ml
With @anmarasovic @tmiller_unimelb @yoavgo pic.twitter.com/VUQM8u0H1P
4. MOTChallenge: A Benchmark for Single-camera Multiple Target Tracking
Patrick Dendorfer, Aljoša Ošep, Anton Milan, Konrad Schindler, Daniel Cremers, Ian Reid, Stefan Roth, Laura Leal-Taixé
Standardized benchmarks have been crucial in pushing the performance of computer vision algorithms, especially since the advent of deep learning. Although leaderboards should not be over-claimed, they often provide the most objective measure of performance and are therefore important guides for research. We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT) launched in late 2014, to collect existing and new data, and create a framework for the standardized evaluation of multiple object tracking methods. The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community, with applications ranging from robot navigation to self-driving cars. This paper collects the first three releases of the benchmark: (i) MOT15, along with numerous state-of-the-art results that were submitted in the last years, (ii) MOT16, which contains new challenging videos, and (iii) MOT17, which extends MOT16 sequences with more precise labels and evaluates tracking performance on three different object detectors. The second and third releases not only offer a significant increase in the number of labeled boxes but also provide labels for multiple object classes besides pedestrians, as well as the level of visibility for every single object of interest. We finally provide a categorization of state-of-the-art trackers and a broad error analysis. This will help newcomers understand the related work and research trends in the MOT community, and hopefully shed some light on potential future research directions.
6 years later it has arrived! @MOTChallenge to appear at IJCV, with new analysis of state-of-the-art trackers. If you are new to MOT or you are struggling to keep up with the literature, give it a look! https://t.co/KoFJBpznHR @PatrickDendorf1 @AljosaOsep @antonmil @stefanroth
— Laura Leal-Taixe (@lealtaixe) October 16, 2020
5. Probabilistic Time Series Forecasting with Structured Shape and Temporal Diversity
Vincent Le Guen, Nicolas Thome
Probabilistic forecasting consists in predicting a distribution of possible future outcomes. In this paper, we address this problem for non-stationary time series, which is very challenging yet crucially important. We introduce the STRIPE model for representing structured diversity based on shape and time features, ensuring both probable predictions while being sharp and accurate. STRIPE is agnostic to the forecasting model, and we equip it with a diversification mechanism relying on determinantal point processes (DPP). We introduce two DPP kernels for modeling diverse trajectories in terms of shape and time, which are both differentiable and proved to be positive semi-definite. To have explicit control over the diversity structure, we also design an iterative sampling mechanism to disentangle shape and time representations in the latent space. Experiments carried out on synthetic datasets show that STRIPE significantly outperforms baseline methods for representing diversity, while maintaining the accuracy of the forecasting model. We also highlight the relevance of the iterative sampling scheme and the importance of using different criteria for measuring quality and diversity. Finally, experiments on real datasets illustrate that STRIPE is able to outperform state-of-the-art probabilistic forecasting approaches in the best sample prediction.
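The abstract does not spell out the sampling machinery, but the core idea of a DPP kernel favoring diverse trajectories can be illustrated with a greedy log-determinant selection over candidate forecasts, a common DPP MAP approximation. This is a hedged sketch of that idea, not STRIPE itself; the RBF kernel and sizes below are arbitrary choices:

```python
import numpy as np

def rbf_kernel(trajectories, gamma=1.0):
    """A PSD similarity kernel between candidate forecast trajectories."""
    sq = np.sum((trajectories[:, None, :] - trajectories[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

def greedy_dpp_select(K, k):
    """Greedily pick k items maximizing log det of the kernel submatrix,
    i.e. the most mutually diverse candidates under kernel K."""
    selected = []
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in range(K.shape[0]):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(K[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        selected.append(best)
    return selected

candidates = np.random.randn(20, 50)       # 20 candidate trajectories of length 50
K = rbf_kernel(candidates, gamma=0.1)
print(greedy_dpp_select(K, k=5))            # indices of 5 mutually diverse forecasts
```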
Probabilistic Time Series Forecasting with Structured Shape and Temporal Diversity. https://t.co/J9inBUDut9 pic.twitter.com/NQggQJ8KMu
— arxiv (@arxiv_org) October 16, 2020
6. Interpretable Machine Learning with an Ensemble of Gradient Boosting Machines
Andrei V. Konstantinov, Lev V. Utkin
A method for the local and global interpretation of a black-box model on the basis of the well-known generalized additive models is proposed. It can be viewed as an extension or a modification of the algorithm using the neural additive model. The method is based on using an ensemble of gradient boosting machines (GBMs) such that each GBM is learned on a single feature and produces a shape function of that feature. The ensemble is composed as a weighted sum of separate GBMs, resulting in a weighted sum of shape functions which forms the generalized additive model. GBMs are built in parallel using randomized decision trees of depth 1, which provide a very simple architecture. Weights of the GBMs as well as of the features are computed in each iteration of boosting by using the Lasso method and then updated by means of a specific smoothing procedure. In contrast to the neural additive model, the method provides weights of features in explicit form, and it is simple to train. Numerous numerical experiments with an algorithm implementing the proposed method on synthetic and real datasets demonstrate its efficiency and its properties for local and global interpretation.
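A rough sketch of the core construction, under the assumption that scikit-learn's depth-1 gradient boosting and Lasso are acceptable stand-ins for the paper's components; the paper's iterative reweighting and smoothing steps are omitted:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

def fit_gbm_gam(X, y, alpha=0.01):
    """Fit one depth-1 GBM per feature (its shape function), then combine the
    shape functions with Lasso weights into a generalized additive model.
    A simplified sketch of the idea, not the paper's full algorithm."""
    gbms, shape_outputs = [], np.zeros(X.shape, dtype=float)
    for j in range(X.shape[1]):
        gbm = GradientBoostingRegressor(max_depth=1, n_estimators=100)
        gbm.fit(X[:, [j]], y)
        gbms.append(gbm)
        shape_outputs[:, j] = gbm.predict(X[:, [j]])
    weights = Lasso(alpha=alpha).fit(shape_outputs, y)  # explicit per-feature weights
    return gbms, weights

def predict_gbm_gam(gbms, weights, X):
    shape_outputs = np.column_stack([g.predict(X[:, [j]]) for j, g in enumerate(gbms)])
    return weights.predict(shape_outputs)
```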
7. Neograd: gradient descent with an adaptive learning rate
Michael F. Zimmer
Since its inception by Cauchy in 1847, the gradient descent algorithm has been without guidance as to how to efficiently set the learning rate. This paper identifies a concept, defines metrics, and introduces algorithms to provide such guidance. The result is a family of algorithms (Neograd) based on a constant-ρ ansatz, where ρ is a metric based on the error of the updates. This allows one to adjust the learning rate at each step, using a formulaic estimate based on ρ. It is now no longer necessary to do trial runs beforehand to estimate a single learning rate for an entire optimization run. The additional costs to operate this metric are trivial. One member of this family of algorithms, NeogradM, can quickly reach much lower cost function values than other first-order algorithms. Comparisons are made mainly between NeogradM and Adam on an array of test functions and on a neural network model for identifying hand-written digits. The results show great performance improvements with NeogradM.
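The abstract leaves the precise definition of ρ to the paper, but the mechanism is roughly: measure how far the actual cost change of an update deviates from its first-order prediction, and rescale the learning rate to keep that error near a target. A hedged sketch of such a loop (my reading of the idea, not the paper's algorithm; `rho_target` and the exponent `p` are made-up knobs):

```python
import numpy as np

def neograd_like_step(f, grad_f, x, lr, rho_target=0.1, p=0.5):
    """One gradient step whose learning rate is rescaled so that rho, the
    relative error between the actual and first-order-predicted cost change,
    stays near rho_target. A sketch of the idea, not Neograd itself."""
    g = grad_f(x)
    x_new = x - lr * g
    predicted_drop = lr * np.dot(g, g)                   # first-order estimate
    actual_drop = f(x) - f(x_new)
    rho = abs(actual_drop - predicted_drop) / (abs(predicted_drop) + 1e-12)
    lr_new = lr * (rho_target / (rho + 1e-12)) ** p      # push rho toward target
    return x_new, lr_new

# toy usage on a quadratic bowl
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x
x, lr = np.array([3.0, -2.0]), 0.5
for _ in range(20):
    x, lr = neograd_like_step(f, grad_f, x, lr)
print(x, lr)
```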
Hah, what a day! This (Adabelief) is the 2nd new optimizer I discovered today. Next to the freshly uploaded Neograd, which just saw on arXiv earlier today: https://t.co/8I6x3ILv64 (the saying goes "all good things come in threes" right?). GitHub repo here: https://t.co/AcIyWx609a https://t.co/2jZk6PVvLX
— Sebastian Raschka (@rasbt) October 16, 2020
8. AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients
Juntang Zhuang, Tommy Tang, Sekhar Tatikonda, Nicha Dvornek, Yifan Ding, Xenophon Papademetris, James S. Duncan
Most popular optimizers for deep learning can be broadly categorized as adaptive methods (e.g. Adam) and accelerated schemes (e.g. stochastic gradient descent (SGD) with momentum). For many models such as convolutional neural networks (CNNs), adaptive methods typically converge faster but generalize worse compared to SGD; for complex settings such as generative adversarial networks (GANs), adaptive methods are typically the default because of their stability. We propose AdaBelief to simultaneously achieve three goals: fast convergence as in adaptive methods, good generalization as in SGD, and training stability. The intuition for AdaBelief is to adapt the stepsize according to the “belief” in the current gradient direction. Viewing the exponential moving average (EMA) of the noisy gradient as the prediction of the gradient at the next time step, if the observed gradient greatly deviates from the prediction, we distrust the current observation and take a small step; if the observed gradient is close to the prediction, we trust it and take a large step. We validate AdaBelief in extensive experiments, showing that it outperforms other methods with fast convergence and high accuracy on image classification and language modeling. Specifically, on ImageNet, AdaBelief achieves comparable accuracy to SGD. Furthermore, in the training of a GAN on Cifar10, AdaBelief demonstrates high stability and improves the quality of generated samples compared to a well-tuned Adam optimizer. Code is available at https://github.com/juntang-zhuang/Adabelief-Optimizer
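The key change relative to Adam is the second-moment term: AdaBelief tracks the EMA of (g_t − m_t)², the deviation of the gradient from its EMA prediction, instead of g_t². A compact numpy sketch of one parameter update, simplified from the released implementation (bias correction included; weight decay and rectification omitted):

```python
import numpy as np

def adabelief_step(theta, grad, m, s, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One AdaBelief update. m is the EMA of gradients (the 'prediction');
    s is the EMA of the squared deviation (grad - m)**2, the 'belief' term."""
    m = beta1 * m + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * (grad - m) ** 2 + eps
    m_hat = m / (1 - beta1 ** t)               # bias correction
    s_hat = s / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(s_hat) + eps)
    return theta, m, s
```

A large deviation (grad far from m) inflates s and shrinks the step; a small deviation shrinks s and allows a larger step, which is exactly the "belief" behavior the abstract describes.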
#NeurIPS2020 Spotlight paper. Delighted to share our AdaBelief optimizer, which trains fast as Adam, generalizes well as SGD, and is stable to train GANs.
— Juntang Zhuang (@JuntangZhuang) October 16, 2020
Paper: https://t.co/xzQPmd4x0V
Project page: https://t.co/YpWkXcGpb6
Code: https://t.co/Bcd3ljAqZ4
9. Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Ana Marasović, Chandra Bhagavatula, Jae Sung Park, Ronan Le Bras, Noah A. Smith, Yejin Choi
Natural language rationales could provide intuitive, higher-level explanations that are easily understandable by humans, complementing the more broadly studied lower-level explanations based on gradients or attention weights. We present the first study focused on generating natural language rationales across several complex visual reasoning tasks: visual commonsense reasoning, visual-textual entailment, and visual question answering. The key challenge of accurate rationalization is comprehensive image understanding at all levels: not just their explicit content at the pixel level, but their contextual contents at the semantic and pragmatic levels. We present Rationale^VT Transformer, an integrated model that learns to generate free-text rationales by combining pretrained language models with object recognition, grounded visual semantic frames, and visual commonsense graphs. Our experiments show that the base pretrained language model benefits from visual adaptation and that free-text rationalization is a promising research direction to complement model interpretability for complex visual-textual reasoning tasks.
📢 New at Findings #EMNLP2020 📢
— Ana Marasović (@anmarasovic) October 16, 2020
"Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs"
w/ @_csBhagav @jae_sung_park96 @Ronan_LeBras @nlpnoah @YejinChoinka
📖 Paper: https://t.co/TuyQTRPjFV
Thread 👇 pic.twitter.com/01XUgIwBYW
10. A Deep Learning Framework for Predicting Digital Asset Price Movement from Trade-by-trade Data
Qi Zhao
- retweets: 64, favorites: 32 (10/17/2020 09:14:32)
- q-fin.ST | cs.AI | cs.LG | q-fin.TR
This paper presents a deep learning framework based on a Long Short-Term Memory network (LSTM) that predicts price movement of cryptocurrencies from trade-by-trade data. The main focus of this study is on predicting short-term price changes over a fixed time horizon from a look-back period. Through careful feature design and a detailed search for the best hyper-parameters, the model is trained to achieve high performance on nearly a year of trade-by-trade data. The optimal model delivers stable high performance (over 60% accuracy) on out-of-sample test periods. In a realistic trading simulation setting, the predictions made by the model could easily be monetized. Moreover, this study shows that the LSTM model could extract universal features from trade-by-trade data, as the learned parameters maintain their high performance on other cryptocurrency instruments that were not included in the training data. This study exceeds existing research in terms of the scale and precision of the data used, as well as the prediction accuracy achieved.
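The abstract does not give the features or hyper-parameters, but the basic architecture it describes, an LSTM over a look-back window of trade-level features predicting short-horizon price movement, can be sketched in PyTorch. Feature dimensions, window length, and class layout below are placeholders, not the paper's values:

```python
import torch
import torch.nn as nn

class TradeLSTMClassifier(nn.Module):
    """LSTM over a look-back window of per-trade features, predicting
    up / flat / down movement over a fixed horizon. A generic sketch,
    not the paper's model."""
    def __init__(self, n_features=8, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (batch, window, n_features)
        _, (h, _) = self.lstm(x)           # h: (1, batch, hidden)
        return self.head(h[-1])            # logits: (batch, n_classes)

model = TradeLSTMClassifier()
logits = model(torch.randn(32, 200, 8))    # 32 windows of 200 trades each
print(logits.shape)                        # torch.Size([32, 3])
```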
A Deep Learning Framework for Predicting Digital Asset Price Movement from Trade-by-trade Data https://t.co/zPbgsgRJVV
— 🗡🕷 (@StunLikes) October 16, 2020
11. Avoiding Side Effects By Considering Future Tasks
Victoria Krakovna, Laurent Orseau, Richard Ngo, Miljan Martic, Shane Legg
Designing reward functions is difficult: the designer has to specify what to do (what it means to complete the task) as well as what not to do (side effects that should be avoided while completing the task). To alleviate the burden on the reward designer, we propose an algorithm to automatically generate an auxiliary reward function that penalizes side effects. This auxiliary objective rewards the ability to complete possible future tasks, which decreases if the agent causes side effects during the current task. The future task reward can also give the agent an incentive to interfere with events in the environment that make future tasks less achievable, such as irreversible actions by other agents. To avoid this interference incentive, we introduce a baseline policy that represents a default course of action (such as doing nothing), and use it to filter out future tasks that are not achievable by default. We formally define interference incentives and show that the future task approach with a baseline policy avoids these incentives in the deterministic case. Using gridworld environments that test for side effects and interference, we show that our method avoids interference and is more effective for avoiding side effects than the common approach of penalizing irreversible actions.
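Reading the abstract operationally: estimate how achievable each candidate future task is from the agent's current state, keep only tasks that would also be achievable under the baseline policy (which removes the interference incentive), and reward the agent for preserving ability on those. A very rough sketch of that filtering-and-averaging step; the value estimators, threshold `tau`, and all names are illustrative assumptions, not the paper's definitions:

```python
def future_task_auxiliary_reward(agent_state, baseline_state, task_values, tau=0.5):
    """Auxiliary reward in the spirit of the abstract:
    - task_values is a list of functions v(state) estimating how achievable a
      future task is from `state`;
    - tasks not achievable (value < tau) from the baseline state are filtered out,
      so the agent gains nothing by interfering to make them achievable;
    - the reward is the mean achievability of the remaining tasks from the
      agent's actual state. All names and tau are assumptions for illustration."""
    kept = [v for v in task_values if v(baseline_state) >= tau]
    if not kept:
        return 0.0
    return sum(v(agent_state) for v in kept) / len(kept)
```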
Our paper "Avoiding Side Effects By Considering Future Tasks" has been accepted to @NeurIPSConf 2020! This is joint work with @LaurentOrseau @RichardMCNgo @MiljanMartic and @ShaneLegg, laying some theoretical groundwork for the side effects problem. https://t.co/mYOvYDGWy7
— Victoria Krakovna (@vkrakovna) October 16, 2020
12. Rainfall-Runoff Prediction at Multiple Timescales with a Single Long Short-Term Memory Network
Martin Gauch, Frederik Kratzert, Daniel Klotz, Grey Nearing, Jimmy Lin, Sepp Hochreiter
- retweets: 36, favorites: 15 (10/17/2020 09:14:32)
- cs.LG | physics.ao-ph
Long Short-Term Memory Networks (LSTMs) have been applied to daily discharge prediction with remarkable success. Many practical scenarios, however, require predictions at more granular timescales. For instance, accurate prediction of short but extreme flood peaks can make a life-saving difference, yet such peaks may escape the coarse temporal resolution of daily predictions. Naively training an LSTM on hourly data, however, entails very long input sequences that make learning hard and computationally expensive. In this study, we propose two Multi-Timescale LSTM (MTS-LSTM) architectures that jointly predict multiple timescales within one model, as they process long-past inputs at a single temporal resolution and branch out into each individual timescale for more recent input steps. We test these models on 516 basins across the continental United States and benchmark against the US National Water Model. Compared to naive prediction with a distinct LSTM per timescale, the multi-timescale architectures are computationally more efficient with no loss in accuracy. Beyond prediction quality, the multi-timescale LSTM can process different input variables at different timescales, which is especially relevant to operational applications where the lead time of meteorological forcings depends on their temporal resolution.
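The branching design described here, one LSTM over the long history at a coarse (daily) resolution whose final state initializes a second LSTM over the recent window at a fine (hourly) resolution, can be sketched in PyTorch as follows. This is a simplified reading of the MTS-LSTM idea, not the authors' implementation; layer sizes and sequence lengths are placeholders:

```python
import torch
import torch.nn as nn

class MultiTimescaleLSTM(nn.Module):
    """Daily LSTM over the long history; its final (h, c) state initializes an
    hourly branch over the recent window. A simplified sketch of the MTS-LSTM
    idea, not the authors' code."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.daily = nn.LSTM(n_features, hidden, batch_first=True)
        self.hourly = nn.LSTM(n_features, hidden, batch_first=True)
        self.head_daily = nn.Linear(hidden, 1)
        self.head_hourly = nn.Linear(hidden, 1)

    def forward(self, x_daily, x_hourly):
        out_d, state = self.daily(x_daily)          # long history, coarse steps
        out_h, _ = self.hourly(x_hourly, state)     # recent window, fine steps
        return self.head_daily(out_d), self.head_hourly(out_h)

model = MultiTimescaleLSTM(n_features=5)
q_daily, q_hourly = model(torch.randn(8, 365, 5), torch.randn(8, 72, 5))
print(q_daily.shape, q_hourly.shape)   # (8, 365, 1) (8, 72, 1)
```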