1. Just Train Twice: Improving Group Robustness without Training Group Information
Evan Zheran Liu, Behzad Haghgoo, Annie S. Chen, Aditi Raghunathan, Pang Wei Koh, Shiori Sagawa, Percy Liang, Chelsea Finn
- retweets: 5544, favorites: 350 (07/21/2021 10:09:31)
- links: abs | pdf
- cs.LG | cs.AI | cs.CY | stat.ML
Standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on certain groups, especially in the presence of spurious correlations between the input and label. Prior approaches that achieve high worst-group accuracy, like group distributionally robust optimization (group DRO), require expensive group annotations for each training point, whereas approaches that do not use such group annotations typically achieve unsatisfactory worst-group accuracy. In this paper, we propose a simple two-stage approach, JTT, that first trains a standard ERM model for several epochs, and then trains a second model that upweights the training examples that the first model misclassified. Intuitively, this upweights examples from groups on which standard ERM models perform poorly, leading to improved worst-group performance. Averaged over four image classification and natural language processing tasks with spurious correlations, JTT closes 75% of the gap in worst-group accuracy between standard ERM and group DRO, while only requiring group annotations on a small validation set in order to tune hyperparameters.
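The two-stage procedure is simple enough to sketch. Below is a minimal PyTorch sketch of the idea, assuming a standard classification setup; the names (make_model, lambda_up, the epoch counts) are illustrative rather than the authors' code, and the upweighting is approximated with a weighted sampler rather than the paper's explicit upsampling, which matches it only in expectation.

```python
# Minimal sketch of the two-stage JTT procedure described above (not the authors' code).
import torch

def jtt_train(make_model, train_set, id_epochs, final_epochs, lambda_up, device="cpu"):
    loss_fn = torch.nn.CrossEntropyLoss()

    # Stage 1: train a standard ERM model for a few epochs.
    model = make_model().to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
    for _ in range(id_epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    # Identify the examples the first model misclassifies.
    model.eval()
    weights = []
    with torch.no_grad():
        for x, y in torch.utils.data.DataLoader(train_set, batch_size=256):
            pred = model(x.to(device)).argmax(dim=1).cpu()
            # Errors get weight lambda_up; correct examples keep weight 1.
            weights += [lambda_up if p != t else 1.0 for p, t in zip(pred, y)]

    # Stage 2: retrain from scratch, sampling errors lambda_up times as often.
    sampler = torch.utils.data.WeightedRandomSampler(weights, num_samples=len(train_set))
    loader2 = torch.utils.data.DataLoader(train_set, batch_size=64, sampler=sampler)
    model2 = make_model().to(device)
    opt2 = torch.optim.SGD(model2.parameters(), lr=1e-3)
    for _ in range(final_epochs):
        for x, y in loader2:
            x, y = x.to(device), y.to(device)
            opt2.zero_grad()
            loss_fn(model2(x), y).backward()
            opt2.step()
    return model2
```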
Does your neural network struggle with spurious correlations?
— Chelsea Finn (@chelseabfinn) July 20, 2021
Check out Evan’s long talk at #ICML2021 on why they should just train twice (JTT).
Paper: https://t.co/MBrgmvyqLB
Talk: https://t.co/Xr3q0oZlR2
Code: https://t.co/HhPqbhXKMh pic.twitter.com/sYrh7SwFNG
2. YOLOX: Exceeding YOLO Series in 2021
Zheng Ge, Songtao Liu, Feng Wang, Zeming Li, Jian Sun
In this report, we present several experience-based improvements to the YOLO series, forming a new high-performance detector: YOLOX. We switch the YOLO detector to an anchor-free design and adopt other advanced detection techniques, i.e., a decoupled head and the leading label assignment strategy SimOTA, to achieve state-of-the-art results across a wide range of model scales: for YOLOX-Nano with only 0.91M parameters and 1.08G FLOPs, we get 25.3% AP on COCO, surpassing NanoDet by 1.8% AP; for YOLOv3, one of the most widely used detectors in industry, we boost it to 47.3% AP on COCO, outperforming the current best practice by 3.0% AP; and for YOLOX-L, with roughly the same number of parameters as YOLOv4-CSP and YOLOv5-L, we achieve 50.0% AP on COCO at 68.9 FPS on a Tesla V100, exceeding YOLOv5-L by 1.8% AP. Further, we won 1st place in the Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021) using a single YOLOX-L model. We hope this report can provide useful experience for developers and researchers in practical settings, and we also provide deployment versions with ONNX, TensorRT, NCNN, and OpenVINO support. Source code is at https://github.com/Megvii-BaseDetection/YOLOX.
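To make the "decoupled head" concrete: the sketch below separates classification from box regression and objectness into distinct branches after a shared stem, which is the structural change the report describes. Layer widths and activations are assumptions for illustration, not YOLOX's exact configuration.

```python
# A minimal sketch of a decoupled detection head (illustrative sizes, not YOLOX's).
import torch
import torch.nn as nn

class DecoupledHead(nn.Module):
    def __init__(self, in_channels, num_classes, width=256):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, width, kernel_size=1)
        # Classification branch.
        self.cls_branch = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.SiLU(),
            nn.Conv2d(width, num_classes, 1),
        )
        # Regression branch predicts a box (4 values) and an objectness score (1).
        self.reg_branch = nn.Sequential(
            nn.Conv2d(width, width, 3, padding=1), nn.SiLU(),
        )
        self.reg_pred = nn.Conv2d(width, 4, 1)
        self.obj_pred = nn.Conv2d(width, 1, 1)

    def forward(self, feat):
        x = self.stem(feat)
        cls_out = self.cls_branch(x)        # per-location class logits
        reg_feat = self.reg_branch(x)
        box_out = self.reg_pred(reg_feat)   # anchor-free box offsets
        obj_out = self.obj_pred(reg_feat)   # objectness logits
        return cls_out, box_out, obj_out
```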
YOLOX: Exceeding YOLO Series in 2021
— AK (@ak92501) July 20, 2021
pdf: https://t.co/xC1ZEPOLRW
abs: https://t.co/BNkflEgqaC
github: https://t.co/rym6pRl10e pic.twitter.com/7Gg3ov9SUN
3. Epistemic Neural Networks
Ian Osband, Zheng Wen, Mohammad Asghari, Morteza Ibrahimi, Xiyuan Lu, Benjamin Van Roy
We introduce the epistemic neural network (ENN) as an interface for uncertainty modeling in deep learning. All existing approaches to uncertainty modeling can be expressed as ENNs, and any ENN can be identified with a Bayesian neural network. However, this new perspective provides several promising directions for future research. Whereas prior work has developed probabilistic inference tools for neural networks, we instead ask: which neural networks are suitable as tools for probabilistic inference? We propose a clear and simple metric for progress in ENNs: the KL divergence with respect to a target distribution. We develop a computational testbed based on inference in a neural network Gaussian process and release our code as a benchmark at https://github.com/deepmind/enn. We evaluate several canonical approaches to uncertainty modeling in deep learning and find that they vary greatly in their performance. We provide insight into the sensitivity of these results and show that our metric is highly correlated with performance in sequential decision problems. Finally, we provide indications that new ENN architectures can improve performance in both statistical quality and computational cost.
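The proposed progress metric is easy to state concretely: score a model by the KL divergence between a target predictive distribution and the model's predictive distribution, averaged over test inputs. The NumPy sketch below illustrates this for categorical predictions; the target here is a stand-in for the posterior the paper derives from a neural network Gaussian process.

```python
# Hedged sketch of the KL-based evaluation idea (illustrative, not the benchmark code).
import numpy as np

def mean_kl(target_probs, model_probs, eps=1e-12):
    """KL(target || model), averaged over test inputs.

    target_probs, model_probs: arrays of shape (num_inputs, num_classes).
    """
    t = np.clip(target_probs, eps, 1.0)
    m = np.clip(model_probs, eps, 1.0)
    return float(np.mean(np.sum(t * (np.log(t) - np.log(m)), axis=-1)))

# Example: an overconfident model pays a high KL when the target is uncertain.
target = np.array([[0.5, 0.5], [0.9, 0.1]])
overconfident = np.array([[0.99, 0.01], [0.99, 0.01]])
print(mean_kl(target, overconfident))
```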
Epistemic Neural Networks
— AK (@ak92501) July 20, 2021
pdf: https://t.co/rVU20iQMbb
abs: https://t.co/t9vAPk83FZ
introduce the epistemic neural network (ENN) as an interface for uncertainty modeling in deep learning pic.twitter.com/A4VP0VvfFW
4. Reasoning-Modulated Representations
Petar Veličković, Matko Bošnjak, Thomas Kipf, Alexander Lerchner, Raia Hadsell, Razvan Pascanu, Charles Blundell
Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any “tabula rasa” neural network would need to re-learn from scratch, penalising data efficiency. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of data-efficient representation learning.
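One way to picture the setup: a processor network is pre-trained on an abstract task that encodes the known structure (e.g. simulating the relevant physics), then frozen and inserted between an encoder and decoder that learn from pixels. The sketch below is our loose reading of that pattern; all module shapes and the toy processor are assumptions.

```python
# A loose sketch of a frozen pre-trained reasoning module inside an
# encoder/decoder pipeline (illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class ReasoningModulatedModel(nn.Module):
    def __init__(self, obs_dim, latent_dim, reasoning_module):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, latent_dim)
        self.processor = reasoning_module  # pre-trained, kept frozen
        for p in self.processor.parameters():
            p.requires_grad = False
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def forward(self, obs):
        z = torch.relu(self.encoder(obs))
        z = self.processor(z)  # reasoning step in latent space
        return self.decoder(z)

# The processor would be pre-trained separately, e.g. to imitate known dynamics.
processor = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 64))
model = ReasoningModulatedModel(obs_dim=128, latent_dim=64, reasoning_module=processor)
out = model(torch.randn(8, 128))
```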
Delighted to share our work on reasoning-modulated representations! Contributed talk at @icmlconf SSL Workshop 🎉
https://t.co/5iTLOx0KpC
— Petar Veličković (@PetarV_93) July 20, 2021
Algo reasoning can help representation learning! See thread👇🧵
w/ Matko @thomaskipf @AlexLerchner @RaiaHadsell @rpascanu @BlundellCharles pic.twitter.com/W7qtNuAFyt
5. EvilModel: Hiding Malware Inside of Neural Network Models
Zhi Wang, Chaoge Liu, Xiang Cui
Delivering malware covertly while evading detection is critical to advanced malware campaigns. In this paper, we present a method that delivers malware covertly and evades detection by embedding it in neural network models. Neural network models are poorly explainable and have good generalization ability. By embedding malware into the neurons, malware can be delivered covertly with minor or even no impact on the performance of the neural network. Meanwhile, since the structure of the neural network model remains unchanged, it can pass the security scans of antivirus engines. Experiments show that 36.9MB of malware can be embedded into a 178MB AlexNet model within 1% accuracy loss, and no suspicions are raised by the antivirus engines on VirusTotal, which verifies the feasibility of this method. With the widespread application of artificial intelligence, utilizing neural networks may become a future trend for malware. We hope this work can provide a reference scenario for defending against neural network-assisted attacks.
GROAN! EvilModel: Hiding Malware Inside of Neural Network Models: https://t.co/vIBtOu5Sq1
— Charlie Stross (@cstross) July 20, 2021
(Caveat: pre-print, unreviewed. Not obviously implausible, though, and utterly horrible security implications if substantiated.) pic.twitter.com/QOEcrBKBpN
OMG LOL.
— Incredible Good Fun Frances Dances (@datakid23) July 20, 2021
EvilModel: Hiding Malware Inside of Neural Network Models
https://t.co/bFYLQfuMr5
@kcarruthers @bruces
6. Equivariant Manifold Flows
Isay Katsman, Aaron Lou, Derek Lim, Qingxuan Jiang, Ser-Nam Lim, Christopher De Sa
Tractably modelling distributions over manifolds has long been an important goal in the natural sciences. Recent work has focused on developing general machine learning models to learn such distributions. However, for many applications these distributions must respect manifold symmetries, a trait which most previous models disregard. In this paper, we lay the theoretical foundations for learning symmetry-invariant distributions on arbitrary manifolds via equivariant manifold flows. We demonstrate the utility of our approach by using it to learn gauge invariant densities over SU(n) in the context of quantum field theory.
I am happy to present our new work, “Equivariant Manifold Flows”, together with @aaron_lou, @dereklim_lzh, Qingxuan Jiang, @sernamlim, @chrismdesa!
— Isay Katsman (@isaykatsman) July 20, 2021
Arxiv: https://t.co/S1GkKikgcz pic.twitter.com/GsQRAgYzyb
7. Translatotron 2: Robust direct speech-to-speech translation
Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, Roi Pomerantz
We present Translatotron 2, a neural direct speech-to-speech translation model that can be trained end-to-end. Translatotron 2 consists of a speech encoder, a phoneme decoder, a mel-spectrogram synthesizer, and an attention module that connects the three preceding components. Experimental results suggest that Translatotron 2 outperforms the original Translatotron by a large margin in translation quality and predicted speech naturalness, and drastically improves the robustness of the predicted speech by mitigating over-generation such as babbling or long pauses. We also propose a new method for retaining the source speaker's voice in the translated speech. The trained model is restricted to retaining the source speaker's voice and, unlike the original Translatotron, cannot generate speech in a different speaker's voice, making the model more robust for production deployment by mitigating potential misuse in creating spoofed audio artifacts. When the new method is used together with a simple concatenation-based data augmentation, the trained Translatotron 2 model is able to retain each speaker's voice for input with speaker turns.
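The component wiring described above can be sketched as follows; the layer types, sizes, and teacher-forced decoder input are assumptions for illustration, not the published architecture.

```python
# A rough sketch of the four components and how attention connects them
# (illustrative modules, not the published Translatotron 2 architecture).
import torch
import torch.nn as nn

class Translatotron2Sketch(nn.Module):
    def __init__(self, n_mels=80, n_phonemes=100, d=256):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, d, batch_first=True, bidirectional=True)  # speech encoder
        self.enc_proj = nn.Linear(2 * d, d)
        self.phoneme_decoder = nn.LSTM(d, d, batch_first=True)                   # phoneme decoder
        self.phoneme_out = nn.Linear(d, n_phonemes)
        self.attention = nn.MultiheadAttention(d, num_heads=4, batch_first=True) # connects components
        self.synthesizer = nn.LSTM(d, d, batch_first=True)                       # mel synthesizer
        self.mel_out = nn.Linear(d, n_mels)

    def forward(self, src_mels, tgt_len):
        enc, _ = self.encoder(src_mels)   # (B, T_src, 2d)
        enc = self.enc_proj(enc)          # (B, T_src, d)
        # A real system decodes autoregressively; zeros stand in for decoder inputs here.
        dec_in = torch.zeros(src_mels.size(0), tgt_len, enc.size(-1))
        dec, _ = self.phoneme_decoder(dec_in)
        phonemes = self.phoneme_out(dec)
        # Attention links the encoder output to both the decoder and the synthesizer.
        ctx, _ = self.attention(dec, enc, enc)
        syn, _ = self.synthesizer(ctx)
        return phonemes, self.mel_out(syn)
```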
Translatotron 2: Robust direct speech-to-speech translation
— AK (@ak92501) July 20, 2021
pdf: https://t.co/9IPIWOwWac
samples: https://t.co/TEXw3z59O2
outperforms Translatotron by a large margin in terms of translation quality and predicted speech naturalness pic.twitter.com/dQ97yE9iow
8. Autonomy 2.0: Why is self-driving always 5 years away?
Ashesh Jain, Luca Del Pero, Hugo Grimmett, Peter Ondruska
Despite the numerous successes of machine learning over the past decade (image recognition, decision-making, NLP, image synthesis), self-driving technology has not yet followed the same trend. In this paper, we study the history, composition, and development bottlenecks of the modern self-driving stack. We argue that the slow progress is caused by approaches that require too much hand-engineering, an over-reliance on road testing, and high fleet deployment costs. We observe that the classical stack has several bottlenecks that preclude the scale needed to capture the long tail of rare events. To resolve these problems, we outline the principles of Autonomy 2.0, an ML-first approach to self-driving, as a viable alternative to the currently adopted state of the art. This approach is based on (i) a fully differentiable AV stack trainable from human demonstrations, (ii) closed-loop data-driven reactive simulation, and (iii) large-scale, low-cost data collection as critical solutions to the scalability issues. We outline the general architecture, survey promising works in this direction, and propose key challenges to be addressed by the community in the future.
Autonomy 2.0: Why is self-driving always 5 years away?
— AK (@ak92501) July 20, 2021
pdf: https://t.co/z3QOYvPAC3
abs: https://t.co/pHWTgqjQU1
outlines the Autonomy 2.0 paradigm, which is designed to solve self-driving using an ML-first approach pic.twitter.com/KR7BwT3TRh
9. CodeMapping: Real-Time Dense Mapping for Sparse SLAM using Compact Scene Representations
Hidenobu Matsuki, Raluca Scona, Jan Czarnowski, Andrew J. Davison
We propose a novel dense mapping framework for sparse visual SLAM systems which leverages a compact scene representation. State-of-the-art sparse visual SLAM systems provide accurate and reliable estimates of the camera trajectory and the locations of landmarks. While these sparse maps are useful for localization, they cannot be used for other tasks such as obstacle avoidance or scene understanding. In this paper we propose a dense mapping framework to complement sparse visual SLAM systems, which takes as input the camera poses, keyframes, and sparse points produced by the SLAM system and predicts a dense depth image for every keyframe. We build on CodeSLAM and use a variational autoencoder (VAE) which is conditioned on intensity, sparse depth, and reprojection error images from sparse SLAM to predict an uncertainty-aware dense depth map. The use of a VAE then enables us to refine the dense depth images through multi-view optimization, which improves the consistency of overlapping frames. Our mapper runs in a separate thread in parallel to the SLAM system in a loosely coupled manner. This flexible design allows for integration with arbitrary metric sparse SLAM systems without delaying the main SLAM process. Our dense mapper can be used not only for local mapping but also for globally consistent dense 3D reconstruction through TSDF fusion. We demonstrate our system running with ORB-SLAM3 and show accurate dense depth estimation, which could enable applications such as robotics and augmented reality.
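The core mechanism, a decoder conditioned on the keyframe's intensity, sparse-depth, and reprojection-error images that maps a compact latent code to dense depth, with the code later refined by gradient descent on a multi-view consistency loss, can be sketched as below. Network shapes and the placeholder loss are illustrative only.

```python
# Minimal sketch of a code-conditioned dense depth decoder plus latent-code
# refinement (illustrative shapes and loss, not the CodeMapping implementation).
import torch
import torch.nn as nn

class ConditionalDepthDecoder(nn.Module):
    def __init__(self, code_dim=32, h=48, w=64):
        super().__init__()
        self.h, self.w = h, w
        self.cond_net = nn.Sequential(  # encode the 3 conditioning images
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.code_proj = nn.Linear(code_dim, h * w)
        self.head = nn.Conv2d(17, 1, 3, padding=1)  # 16 cond channels + 1 code plane

    def forward(self, code, cond_images):
        # cond_images: (B, 3, H, W) = intensity, sparse depth, reprojection error
        c = self.cond_net(cond_images)
        z = self.code_proj(code).view(-1, 1, self.h, self.w)
        return self.head(torch.cat([c, z], dim=1))  # dense depth (B, 1, H, W)

# Refining only the code against a (placeholder) multi-view consistency loss:
decoder = ConditionalDepthDecoder()
cond = torch.rand(1, 3, 48, 64)
code = torch.zeros(1, 32, requires_grad=True)
opt = torch.optim.Adam([code], lr=1e-2)
for _ in range(10):
    depth = decoder(code, cond)
    loss = depth.var()  # stand-in for photometric/geometric consistency terms
    opt.zero_grad()
    loss.backward()
    opt.step()
```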
My first work at Imperial is accepted to IEEE Robotics and Automation Letters!
— Hide (@HideMatsu82) July 20, 2021
We propose CodeMapping, a real-time and code-based dense mapper for sparse vSLAM.
Huge thanks to co-authors @RalucaScona @czarnowskij @AjdDavison!
https://t.co/bFLir4nKmL
https://t.co/Q6DML69zxS pic.twitter.com/NHNZKxR0Xv
10. Megaverse: Simulating Embodied Agents at One Million Experiences per Second
Aleksei Petrenko, Erik Wijmans, Brennan Shacklett, Vladlen Koltun
We present Megaverse, a new 3D simulation platform for reinforcement learning and embodied AI research. The efficient design of our engine enables physics-based simulation with high-dimensional egocentric observations at more than 1,000,000 actions per second on a single 8-GPU node. Megaverse is up to 70x faster than DeepMind Lab in fully-shaded 3D scenes with interactive objects. We achieve this high simulation performance by leveraging batched simulation, thereby taking full advantage of the massive parallelism of modern GPUs. We use Megaverse to build a new benchmark that consists of several single-agent and multi-agent tasks covering a variety of cognitive challenges. We evaluate model-free RL on this benchmark to provide baselines and facilitate future research. The source code is available at https://www.megaverse.info
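Batched simulation, the key to the throughput numbers above, amounts to stepping thousands of environments as one vectorized tensor operation instead of looping per environment. The toy sketch below illustrates that pattern; it is a generic illustration, not Megaverse's engine.

```python
# Toy illustration of batched environment stepping (not Megaverse's engine).
import torch

class BatchedEnvs:
    def __init__(self, num_envs, obs_dim,
                 device="cuda" if torch.cuda.is_available() else "cpu"):
        self.device = device
        self.state = torch.zeros(num_envs, obs_dim, device=device)

    def step(self, actions):
        # One fused update advances every environment at once.
        self.state = self.state + actions        # toy dynamics
        reward = -self.state.abs().mean(dim=1)   # toy reward
        return self.state, reward

envs = BatchedEnvs(num_envs=4096, obs_dim=16)
actions = torch.randn(4096, 16, device=envs.device)
obs, reward = envs.step(actions)  # thousands of transitions per call
```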
Megaverse: Simulating Embodied Agents at One Million Experiences per Second
— AK (@ak92501) July 20, 2021
pdf: https://t.co/IMTBxLsXKZ
abs: https://t.co/LZws2Eg7gl
project page: https://t.co/S6NmtU2poc
github: https://t.co/OqNTANBSfI pic.twitter.com/wWz3s0uprs