1. A Bayesian neural network predicts the dissolution of compact planetary systems
Miles Cranmer, Daniel Tamayo, Hanno Rein, Peter Battaglia, Samuel Hadden, Philip J. Armitage, Shirley Ho, David N. Spergel
- retweets: 1406, favorites: 212 (01/14/2021 10:53:51)
- links: abs | pdf
- astro-ph.EP | astro-ph.IM | cs.AI | cs.LG | stat.ML
Despite over three hundred years of effort, no solutions exist for predicting when a general planetary configuration will become unstable. We introduce a deep learning architecture to push forward this problem for compact systems. While current machine learning algorithms in this area rely on scientist-derived instability metrics, our new technique learns its own metrics from scratch, enabled by a novel internal structure inspired by dynamics theory. Our Bayesian neural network model can accurately predict not only if, but also when a compact planetary system with three or more planets will become unstable. Our model, trained directly from short N-body time series of raw orbital elements, is more than two orders of magnitude more accurate at predicting instability times than analytical estimators, while also reducing the bias of existing machine learning algorithms by nearly a factor of three. Despite being trained on compact resonant and near-resonant three-planet configurations, the model demonstrates robust generalization to both non-resonant and higher-multiplicity configurations, in the latter case outperforming models fit to that specific set of integrations. The model computes instability estimates up to five orders of magnitude faster than a numerical integrator, and unlike previous efforts provides confidence intervals on its predictions. Our inference model is publicly available in the SPOCK package, with training code open-sourced.
Very excited to present our new work: we adapt Bayesian neural networks to predict the dissolution of compact planetary systems, a variant of the three-body problem!
— Miles Cranmer (@MilesCranmer) January 13, 2021
Blogpost/code: https://t.co/sNKv1Xduff
Paper: https://t.co/bNsN8VqULq
API: https://t.co/wPNMmTOOiq
Thread: 👇 pic.twitter.com/2HG75x3vcU
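For context, here is roughly how the released package is meant to be used: a rebound simulation of a compact three-planet system is handed to the deep regressor, which returns a median instability-time estimate with a confidence interval. The method name `predict_instability_time` and its return convention follow the SPOCK README pattern; treat them as assumptions and verify against the package docs.

```python
import rebound                      # N-body integrator used to set up the system
from spock import DeepRegressor    # Bayesian deep regressor from this paper

# A compact, near-resonant three-planet system (masses in solar units,
# periods in units of the innermost orbit)
sim = rebound.Simulation()
sim.add(m=1.0)                           # star
sim.add(m=1e-5, P=1.0, e=0.03, l=0.3)
sim.add(m=1e-5, P=1.2, e=0.03, l=2.8)
sim.add(m=1e-5, P=1.5, e=0.03, l=-0.5)
sim.move_to_com()

model = DeepRegressor()
# Median prediction plus posterior interval, in innermost-orbit periods
median, lower, upper = model.predict_instability_time(sim, samples=10000)
print(f"t_inst ~ {median:.3g} orbits (interval: {lower:.3g} to {upper:.3g})")
```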
2. Benchmarking Simulation-Based Inference
Jan-Matthis Lueckmann, Jan Boelts, David S. Greenberg, Pedro J. Gonçalves, Jakob H. Macke
Recent advances in probabilistic modelling have led to a large number of simulation-based inference algorithms which do not require numerical evaluation of likelihoods. However, a public benchmark with appropriate performance metrics for such ‘likelihood-free’ algorithms has been lacking. This has made it difficult to compare algorithms and identify their strengths and weaknesses. We set out to fill this gap: We provide a benchmark with inference tasks and suitable performance metrics, with an initial selection of algorithms including recent approaches employing neural networks and classical Approximate Bayesian Computation methods. We found that the choice of performance metric is critical, that even state-of-the-art algorithms have substantial room for improvement, and that sequential estimation improves sample efficiency. Neural network-based approaches generally exhibit better performance, but there is no uniformly best algorithm. We provide practical advice and highlight the potential of the benchmark to diagnose problems and improve algorithms. The results can be explored interactively on a companion website. All code is open source, making it possible to contribute further benchmark tasks and inference algorithms.
Excited to share our new paper on benchmarking simulation-based inference!
— jan-matthis (@janmatthis) January 13, 2021
Check out our interactive website w/short summary
With @janfiete, @dvdgbg, @ppjgoncalves and @jakhmack
Paper: https://t.co/bClvCjPhOG
Code: https://t.co/61bKxN9u78
https://t.co/mEjQOUD7WA
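The benchmark ships as the `sbibm` package; the snippet below mirrors its README pattern of picking a task, running a baseline algorithm under a fixed simulation budget, and scoring against reference posterior samples. The specific names (`rej_abc`, `c2st`) are assumptions to check against the repository.

```python
import sbibm
from sbibm.algorithms import rej_abc   # classical rejection-ABC baseline
from sbibm.metrics import c2st         # classifier two-sample test metric

task = sbibm.get_task("two_moons")     # one of the benchmark inference tasks

# Run the algorithm with a fixed budget of simulator calls
posterior_samples, _, _ = rej_abc(
    task=task, num_samples=10_000, num_observation=1, num_simulations=100_000
)

# Compare against precomputed reference posterior samples; a score of
# 0.5 means the classifier cannot tell the two sample sets apart (ideal)
reference = task.get_reference_posterior_samples(num_observation=1)
print(c2st(reference, posterior_samples))
```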
3. From Tinkering to Engineering: Measurements in Tensorflow Playground
Henrik Hoeiness, Axel Harstad, Gerald Friedland
In this article, we present an extension of the Tensorflow Playground, called Tensorflow Meter (TFMeter for short). TFMeter is an interactive neural network architecting tool that allows the visual creation of different neural network architectures. Beyond what its ancestor, the Playground, offers, our tool shows information-theoretic measurements while constructing, training, and testing the network. As a result, each change to the architecture results in a change in at least one of the measurements, building better engineering intuition for what different architectures are able to learn. The measurements are drawn from various places in the literature. In this demo, we describe our web application, available online at http://tfmeter.icsi.berkeley.edu/, and argue that, in the same way that the original Playground is meant to build intuition about neural networks, our extension educates users on available measurements, which we hope will ultimately improve experimental design and reproducibility in the field.
From Tinkering to Engineering: Measurements in Tensorflow Playground
— AK (@ak92501) January 13, 2021
pdf: https://t.co/Ine0ZOcpKX
abs: https://t.co/uC6CTcJRqW
web demo: https://t.co/2sGZsi0nVe pic.twitter.com/Hfqphn6rwD
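TFMeter's measurements live in the web UI rather than a Python API, but the flavor is easy to convey. Below is an entirely illustrative, self-contained sketch of one information-theoretic quantity of the kind such a tool can report, a histogram-based entropy estimate of a layer's activations; this is not TFMeter code.

```python
import numpy as np

def activation_entropy(activations: np.ndarray, bins: int = 30) -> float:
    """Histogram estimate of Shannon entropy (bits) for a 1-D sample
    of activations. Purely illustrative of TFMeter-style measurements."""
    counts, _ = np.histogram(activations, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # drop empty bins
    return float(-(p * np.log2(p)).sum())

# Example: heavily saturated tanh units carry less entropy than raw ones
rng = np.random.default_rng(0)
z = rng.normal(size=10_000)
print(activation_entropy(z), activation_entropy(np.tanh(3 * z)))
```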
4. Mixup Without Hesitation
Hao Yu, Huanyu Wang, Jianxin Wu
Mixup linearly interpolates pairs of examples to form new samples; it is easy to implement and has been shown to be effective in image classification tasks. However, mixup has two drawbacks: more training epochs are needed to obtain a well-trained model, and mixup requires tuning a hyper-parameter to gain appropriate capacity, which is a difficult task. In this paper, we find that mixup constantly explores the representation space, and, inspired by the exploration-exploitation dilemma in reinforcement learning, we propose mixup Without hesitation (mWh), a concise, effective, and easy-to-use training algorithm. We show that mWh strikes a good balance between exploration and exploitation by gradually replacing mixup with basic data augmentation. It can achieve a strong baseline with less training time than original mixup and without searching for the optimal hyper-parameter, i.e., mWh acts as mixup without hesitation. mWh also transfers to CutMix and gains consistent improvement on other machine learning and computer vision tasks such as object detection. Our code is open-source and available at https://github.com/yuhao318/mwh
Mixup Without Hesitation
https://t.co/QXSgMvHeDR
https://t.co/pvYqVSYJTi pic.twitter.com/6k637FD7Kg
— phalanx (@ZFPhalanx) January 13, 2021
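The abstract's one-line summary, gradually replacing mixup with basic data augmentation, can be sketched directly. The linear decay schedule and Beta(α, α) mixing below are illustrative placeholders, not the paper's exact algorithm:

```python
import numpy as np

def mwh_batch(x1, y1, x2, y2, epoch, total_epochs, alpha=1.0, rng=None):
    """mixup Without hesitation (mWh), sketched: early in training,
    batches are usually mixed (exploration); late in training, mixup is
    mostly switched off in favor of basic augmentation (exploitation)."""
    rng = rng or np.random.default_rng()
    p_mix = 1.0 - epoch / total_epochs        # illustrative decay schedule
    if rng.random() < p_mix:
        lam = rng.beta(alpha, alpha)
        x = lam * x1 + (1.0 - lam) * x2       # standard mixup interpolation
        return x, y1, y2, lam                 # loss = lam*CE(y1) + (1-lam)*CE(y2)
    return x1, y1, y1, 1.0                    # plain (basic-augmented) batch
```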
5. Interpretable discovery of new semiconductors with machine learning
Hitarth Choubisa, Petar Todorović, Joao M. Pina, Darshan H. Parmar, Ziliang Li, Oleksandr Voznyy, Isaac Tamblyn, Edward Sargent
- retweets: 30, favorites: 52 (01/14/2021 10:53:51)
- links: abs | pdf
- cond-mat.mtrl-sci | cs.LG
Machine learning models of materials accelerate discovery compared to ab initio methods: deep learning models now reproduce density functional theory (DFT)-calculated results at one hundred-thousandth of the cost of DFT. To provide guidance in experimental materials synthesis, these models need to be coupled with an accurate yet effective search algorithm and training data consistent with experimental observations. Here we report an evolutionary-algorithm-powered search that uses machine-learned surrogate models trained on high-throughput hybrid functional DFT data benchmarked against experimental bandgaps: Deep Adaptive Regressive Weighted Intelligent Network (DARWIN). The strategy enables efficient search over vast ternary and quaternary materials spaces for candidates with target properties. It provides interpretable design rules, such as our finding that the difference in electronegativity between the halide and the B-site cation is a strong predictor of ternary structural stability. As an example, when we seek UV emission, DARWIN predicts K2CuX3 (X = Cl, Br) as a promising materials family based on its electronegativity difference. We synthesized these materials and found them to be stable, direct-bandgap UV emitters. The approach also allows knowledge distillation for use by humans.
Happy to introduce DARWIN
https://t.co/H1jKtkZksM
— Isaac Tamblyn (@itamblyn) January 13, 2021
Using evolutionary search, we provide an interpretable and accurate approach to designing new materials
Hitarth Choubisa & Petar Todorović did great work building this Deep Adaptive Regressive Weighted Intelligent Network
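The abstract describes the overall loop, an evolutionary search whose fitness comes from a machine-learned surrogate rather than DFT, which is cheap to sketch generically. Everything named below (the candidate encoding, `surrogate`, `mutate`) is a placeholder, not the DARWIN implementation:

```python
import random

def surrogate_evolution(population, surrogate, target_gap,
                        mutate, generations=50, keep=20, seed=0):
    """Generic surrogate-driven evolutionary search (illustrative).
    surrogate(c): ML model standing in for hybrid-functional DFT bandgaps.
    mutate(c):    perturbs a candidate composition/structure."""
    rng = random.Random(seed)
    for _ in range(generations):
        # Rank by how close the predicted bandgap is to the target
        population.sort(key=lambda c: abs(surrogate(c) - target_gap))
        parents = population[:keep]
        children = [mutate(rng.choice(parents))
                    for _ in range(len(population) - keep)]
        population = parents + children
    return min(population, key=lambda c: abs(surrogate(c) - target_gap))
```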
6. Categories of Nets
John C. Baez, Fabrizio Genovese, Jade Master, Michael Shulman
We present a unified framework for Petri nets and various variants, such as pre-nets and Kock's whole-grain Petri nets. Our framework is based on a less well-studied notion that we call Σ-nets, which allow finer control over whether tokens are treated using the collective or individual token philosophy. We describe three forms of execution semantics in which pre-nets generate strict monoidal categories, Σ-nets (including whole-grain Petri nets) generate symmetric strict monoidal categories, and Petri nets generate commutative monoidal categories, all by left adjoint functors. We also construct adjunctions relating these categories of nets to each other, in particular showing that all kinds of net can be embedded in the unifying category of Σ-nets, in a way that commutes coherently with their execution semantics.
John C. Baez, Fabrizio Genovese, Jade Master, Michael Shulman: Categories of Nets https://t.co/zidrOjDdIC https://t.co/Gu8ei4d68O
— arXiv math.CT Category Theory (@mathCTbot) January 13, 2021
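To make the "collective token philosophy" concrete: there a marking is just a multiset of places, and firing a transition is multiset rewriting, which is why executions compose like a commutative monoidal category. A tiny illustrative encoding, not from the paper:

```python
from collections import Counter

def fire(marking: Counter, inputs: Counter, outputs: Counter) -> Counter:
    """Fire one transition under the collective token philosophy:
    tokens in a place are interchangeable, so only multiplicities matter."""
    if any(marking[p] < n for p, n in inputs.items()):
        raise ValueError("transition not enabled")
    return marking - inputs + outputs

# Two indistinguishable tokens on place 'a'; t consumes one 'a', emits one 'b'
print(fire(Counter(a=2), Counter(a=1), Counter(b=1)))  # Counter({'a':1,'b':1})
```

Pre-nets and Σ-nets instead track token identity to varying degrees, which is what upgrades the resulting execution category from commutative monoidal to (symmetric) strict monoidal.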
7. Superpixel-based Refinement for Object Proposal Generation
Christian Wilms, Simone Frintrop
Precise segmentation of objects is an important problem in tasks like class-agnostic object proposal generation and instance segmentation. Deep learning-based systems usually generate segmentations of objects from coarse feature maps, due to the inherent downsampling in CNNs. This leads to segmentation boundaries that do not adhere well to the object boundaries in the image. To tackle this problem, we introduce a new superpixel-based refinement approach on top of the state-of-the-art object proposal system AttentionMask. The refinement utilizes superpixel pooling for feature extraction and a novel superpixel classifier to determine whether a high-precision superpixel belongs to an object. Our experiments show an improvement of up to 26.0% in average recall compared to the original AttentionMask. Furthermore, qualitative and quantitative analyses of the segmentations reveal significant improvements in boundary adherence for the proposed refinement compared to various deep learning-based state-of-the-art object proposal generation systems.
Superpixel-based Refinement for Object Proposal Generation
— AK (@ak92501) January 13, 2021
pdf: https://t.co/JOSl6GUStC
abs: https://t.co/gSLWQalmxM
github: https://t.co/BLw6qkwUCC pic.twitter.com/liLog6SUEU
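The refinement's central primitive, superpixel pooling, is simple to state: average the feature map over each superpixel's pixels, then feed one pooled vector per superpixel to the classifier. A minimal numpy sketch follows; the shapes and the surrounding AttentionMask plumbing are assumptions, not the paper's code.

```python
import numpy as np

def superpixel_pool(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Average-pool features over superpixels.
    features: (H, W, C) feature map aligned to the image
    labels:   (H, W) integer superpixel id for every pixel
    returns:  (num_superpixels, C) pooled feature per superpixel."""
    flat = features.reshape(-1, features.shape[-1])
    ids = labels.ravel()
    num_sp = int(ids.max()) + 1
    pooled = np.zeros((num_sp, flat.shape[1]), dtype=flat.dtype)
    np.add.at(pooled, ids, flat)                    # sum per superpixel
    counts = np.bincount(ids, minlength=num_sp)
    return pooled / np.maximum(counts, 1)[:, None]  # mean per superpixel
```

Each pooled row would then go through the paper's superpixel classifier to decide object membership.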