1. Tabular Data: Deep Learning is Not All You Need
Ravid Shwartz-Ziv, Amitai Armon
Tabular Data: Deep Learning is Not All You Need. Nice comparison betw XGBoost & recent DNNs for tabular data. Surprise?! XGboost comes out on top for most datasets (esp. those not incl in the DNN papers). What's even better? An ensemble of XGBoost & DNNs https://t.co/GOeH3XxqV4 pic.twitter.com/kCNAJPggon
— Sebastian Raschka (@rasbt) June 8, 2021
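To make the headline concrete, here is a minimal sketch of the kind of probability-averaging ensemble the tweet alludes to, with XGBoost plus a plain scikit-learn MLP standing in for the tabular DNNs compared in the paper; the dataset and hyperparameters are placeholders, not the paper's setup.

from sklearn.datasets import load_breast_cancer          # stand-in tabular dataset
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# two base models: gradient-boosted trees and a small neural network
xgb = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1).fit(X_tr, y_tr)
dnn = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0).fit(X_tr, y_tr)

# the "ensemble of XGBoost & DNNs": average the predicted class probabilities
proba = 0.5 * xgb.predict_proba(X_te) + 0.5 * dnn.predict_proba(X_te)
print("ensemble accuracy:", (proba.argmax(axis=1) == y_te).mean())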
2. The Inductive Bias of Quantum Kernels
Jonas M. Kübler, Simon Buchholz, Bernhard Schölkopf
🧐 Can Quantum Machine Learning Models outperform classical ML models?
— Jonas M. Kübler (@jonas_kubler) June 8, 2021
We worked on a few steps towards answering this question: https://t.co/kZiWiywTMM
with Simon Buchholz and @bschoelkopf
a Thread 📜 1/8 pic.twitter.com/Ch7TE9CxEi
3. Meta-Learning with Fewer Tasks through Task Interpolation
Huaxiu Yao, Linjun Zhang, Chelsea Finn
Meta-learning methods need a large set of training tasks. We introduce a simple regularizer that helps, especially when you don’t have a lot of tasks.
— Chelsea Finn (@chelseabfinn) June 8, 2021
Meta-Learning with Fewer Tasks through Task Interpolation
Paper: https://t.co/4xwGEI04eP
with @HuaxiuYaoML, @zlj11112222 pic.twitter.com/qkSOxTGO4k
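The regularizer works by interpolating tasks; as a rough illustration of that idea (a generic mixup-style sketch, not the authors' exact procedure), one can blend two tasks' features and one-hot labels into a synthetic extra training task:

import numpy as np

def interpolate_tasks(feats_a, labels_a, feats_b, labels_b, alpha=0.5, rng=None):
    # feats_*: (n_i, d) arrays of (hidden) features; labels_*: (n_i, k) one-hot arrays
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)                      # mixup coefficient
    n = min(len(feats_a), len(feats_b))
    ia = rng.permutation(len(feats_a))[:n]
    ib = rng.permutation(len(feats_b))[:n]
    feats = lam * feats_a[ia] + (1.0 - lam) * feats_b[ib]
    labels = lam * labels_a[ia] + (1.0 - lam) * labels_b[ib]   # soft labels
    return feats, labels                              # an extra "interpolated" task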
4. Shape As Points: A Differentiable Poisson Solver
Songyou Peng, Chiyu “Max” Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys, Andreas Geiger
Shape As Points: A Differentiable Poisson Solver
— AK (@ak92501) June 8, 2021
pdf: https://t.co/h8tzSKTViu
abs: https://t.co/e1cMmpwnmZ
shape representation: interpretable, lightweight, yields HQ watertight meshes at much lower inference times compared to neural implicit representations @songyoupeng pic.twitter.com/C94pg3jhxj
5. Motion Planning Transformers: One Model to Plan Them All
Jacob J. Johnson, Linjun Li, Ahmed H. Qureshi, Michael C. Yip
Motion Planning Transformers: One Model to Plan Them All
— AK (@ak92501) June 8, 2021
pdf: https://t.co/5T8NPyhukm
abs: https://t.co/4IQfPiTZan
identifies regions on the map using transformers to provide attention to map areas likely to include the best path, and uses local planners to generate the final collision-free path pic.twitter.com/ZkhS88XWHg
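As a toy illustration of that two-stage structure, the sketch below swaps the transformer for a placeholder array of patch scores and the paper's local planners for a plain grid BFS, so only the pipeline (attend to promising regions, then plan inside them) carries over; everything named here is illustrative rather than the paper's implementation.

import numpy as np
from collections import deque

def plan_in_attended_region(occupancy, patch_scores, start, goal, patch=8, keep=0.3):
    # occupancy: (H, W) grid with 0 = free, 1 = obstacle; H, W divisible by `patch`
    # patch_scores: (H//patch, W//patch) scores that the transformer would produce
    H, W = occupancy.shape
    thresh = np.quantile(patch_scores, 1.0 - keep)           # keep the top fraction of patches
    keep_mask = (patch_scores >= thresh).astype(float)
    mask = np.kron(keep_mask, np.ones((patch, patch))) > 0   # upsample to a per-cell mask
    free = (occupancy == 0) & mask
    free[start] = free[goal] = True                          # never mask out the endpoints
    prev, queue = {start: None}, deque([start])              # BFS restricted to attended cells
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < H and 0 <= nxt[1] < W and free[nxt] and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None                                              # no path inside the attended region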
6. Deep Medial Fields
Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
Deep Medial Fields
— AK (@ak92501) June 8, 2021
pdf: https://t.co/4A0MYSgusu
abs: https://t.co/yUndmxSM1t
an implicit representation of the local thickness, that expands the capacity of implicit representations for 3D geometry pic.twitter.com/sneCJPQnZD
7. 3DB: A Framework for Debugging Computer Vision Models
Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai Vemprala, Logan Engstrom, Vibhav Vineet, Kai Xiao, Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry
Introducing 3DB, a framework for debugging models using 3D rendering. Reproduce your favorite robustness analyses or design your own analyses/experiments in just a few lines of code! (1/3)
— Aleksander Madry (@aleks_madry) June 8, 2021
Paper: https://t.co/lYdjdEAAKS
Code: https://t.co/apgPypgolQ
Blog: https://t.co/69HBeutEt9 pic.twitter.com/L2f2MfMEsI
Check out *3DB*: our new tool for debugging computer vision models via 3D simulation! A year-long effort from our lab @MIT and @MSFTResearch. https://t.co/KoUxwot5ZR
— Hadi Salman (@hadisalmanX) June 8, 2021
We have extensive demos, docs, code and blogpost! https://t.co/DVXUbf7U2J https://t.co/wj26QXNafG
8. Exploring the Limits of Out-of-Distribution Detection
Stanislav Fort, Jie Ren, Balaji Lakshminarayanan
Exploring the Limits of Out-of-Distribution Detection
— AK (@ak92501) June 8, 2021
pdf: https://t.co/9WrKBri4PQ
abs: https://t.co/bZPjf0mckf
fine-tuning large-scale pre-trained transformers and using few-shot outlier exposure can significantly improve the SOTA pic.twitter.com/Ucorh8UazN
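A common ingredient in this line of work is scoring inputs by Mahalanobis distance to class-conditional Gaussians fitted on the encoder's features (here assumed to come from a pre-trained or fine-tuned transformer); the sketch below shows that generic recipe, which may differ in detail from the paper's exact setup.

import numpy as np

def fit_mahalanobis(feats, labels):
    # feats: (N, D) in-distribution features from the encoder; labels: (N,) class ids
    classes = np.unique(labels)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    centred = feats - means[np.searchsorted(classes, labels)]
    cov = centred.T @ centred / len(feats)                   # shared (tied) covariance
    return means, np.linalg.pinv(cov)

def ood_score(x, means, precision):
    # larger score = farther from every class mean = more likely out-of-distribution
    d = x[:, None, :] - means[None, :, :]                    # (N, K, D)
    maha = np.einsum('nkd,de,nke->nk', d, precision, d)
    return maha.min(axis=1)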
9. Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer
Zilong Huang, Youcheng Ben, Guozhong Luo, Pei Cheng, Gang Yu, Bin Fu
Shuffle Transformer: Rethinking Spatial Shuffle for Vision Transformer
— AK (@ak92501) June 8, 2021
pdf: https://t.co/qkqHXKWpLu
abs: https://t.co/VpAssMY0Ec pic.twitter.com/JQYVlUPeoO
10. Learning to Efficiently Sample from Diffusion Probabilistic Models
Daniel Watson, Jonathan Ho, Mohammad Norouzi, William Chan
Learning to Efficiently Sample from Diffusion Probabilistic Models
— Aran Komatsuzaki (@arankomatsuzaki) June 8, 2021
Discovers inference time schedules requiring as few as 32 refinement steps, while sacrificing less than 0.1 bits per dimension compared to the default 4,000 steps used on ImageNet 64x64. https://t.co/cPjuCNKqh8 pic.twitter.com/QB2S1HKMst
Learning to Efficiently Sample from Diffusion Probabilistic Models
— AK (@ak92501) June 8, 2021
pdf: https://t.co/FiucHSsuLR
abs: https://t.co/KvXqT2OmRp
a novel and efficient dynamic programming algorithm to discover the optimal inference schedule for a pre-trained DDPM pic.twitter.com/acIRp5v5j3
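The dynamic program is easy to picture once the costs are given: if cost[s, t] is a precomputed penalty (e.g. a negative-ELBO term) for denoising directly from timestep t down to s, then choosing the best K-step schedule is a shortest-path problem over timesteps. The sketch below shows that generic DP; computing the cost matrix from the pre-trained DDPM is the model-specific part it leaves out, and the bookkeeping is simplified relative to the paper.

import numpy as np

def best_schedule(cost, K):
    # cost: (T+1, T+1) array, cost[s, t] = penalty for jumping from timestep t down to s < t
    T = cost.shape[0] - 1
    dp = np.full((K + 1, T + 1), np.inf)        # dp[k, t]: best cost of reaching t in k jumps
    choice = np.zeros((K + 1, T + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(1, T + 1):
            candidates = dp[k - 1, :t] + cost[:t, t]   # arrive at t from any earlier s
            s = int(np.argmin(candidates))
            dp[k, t], choice[k, t] = candidates[s], s
    schedule, t = [T], T                        # backtrack from the full-noise endpoint
    for k in range(K, 0, -1):
        t = choice[k, t]
        schedule.append(t)
    return schedule[::-1]                       # 0 = t_0 < t_1 < ... < t_K = T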
11. A Variational Perspective on Diffusion-Based Generative Models and Score Matching
Chin-Wei Huang, Jae Hyun Lim, Aaron Courville
Super excited to share our new theory paper on connecting diffusion models & score matching from a variational perspective (i.e. likelihood training)! https://t.co/M7rcU0dieI
— Chin-Wei Huang (@chinwei_h) June 8, 2021
We derive a new ELBO for general continuous-time diffusion models.
w/ @jaehyunlim0606 & @AaronCourville pic.twitter.com/NLynQX38wk
12. Self-Damaging Contrastive Learning
Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang
Self-Damaging Contrastive Learning
— AK (@ak92501) June 8, 2021
pdf: https://t.co/wHoD6UJVoT
abs: https://t.co/cBy0btgmR0
github: https://t.co/AzLq4utoiX pic.twitter.com/DnutaOKrje
13. BayesIMP: Uncertainty Quantification for Causal Data Fusion
Siu Lun Chau, Jean-François Ton, Javier González, Yee Whye Teh, Dino Sejdinovic
Interested in Kernel Methods, Causal Inference and Uncertainty quantification?
— Jean-François Ton (@jeanfrancois287) June 8, 2021
In our newest work we introduce BayesIMP: Uncertainty Quantification for Causal Data Fusion! https://t.co/trn8cfk8dM
Big thanks to @Chau9991 @javiergonzh @yeewhye @sejDino 1/n pic.twitter.com/34TiJvMKeG
14. Multi-chart flows
Dimitris Kalatzis, Johan Ziruo Ye, Jesper Wohlert, Søren Hauberg
New preprint on normalizing flows! https://t.co/TImKXBP88v
— Dimitris Kalatzis (@__DiracDelta) June 8, 2021
Question: can you use normalizing flows to learn a density on a smooth manifold along with the manifold structure?
TL;DR: You can, if you know how to use them ;)
A thread.
15. RegionViT: Regional-to-Local Attention for Vision Transformers
Chun-Fu Chen, Rameswar Panda, Quanfu Fan
RegionViT: Regional-to-Local Attention for Vision Transformers
— AK (@ak92501) June 8, 2021
pdf: https://t.co/fEWYlYo8QI
abs: https://t.co/d9Zm3DIZei
architecture that adopts the pyramid structure and employs a novel regional-to-local attention rather than global self-attention in vision transformers pic.twitter.com/rIa607LudC
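A rough reading of "regional-to-local attention" (pooling and layer choices below are our own simplification, not the paper's design): pool each window into a regional token, let the regional tokens attend to each other globally, then let each window's local tokens attend over themselves plus their regional token.

import torch
import torch.nn as nn

class RegionalToLocalAttention(nn.Module):
    def __init__(self, dim=192, window=7, heads=4):
        super().__init__()
        self.window = window
        self.regional_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, H, W, C), H and W divisible by window
        B, H, W, C = x.shape
        w = self.window
        # split the feature map into non-overlapping windows: (B, nW, w*w, C)
        xw = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
        xw = xw.reshape(B, -1, w * w, C)
        regional = xw.mean(dim=2)               # one regional token per window: (B, nW, C)
        regional, _ = self.regional_attn(regional, regional, regional)   # global exchange
        nW = regional.shape[1]
        # local attention over [regional token; local tokens] inside each window
        tokens = torch.cat([regional.unsqueeze(2), xw], dim=2).reshape(B * nW, 1 + w * w, C)
        out, _ = self.local_attn(tokens, tokens, tokens)
        local = out[:, 1:, :].reshape(B, H // w, W // w, w, w, C)
        return local.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)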
16. Photonic Differential Privacy with Direct Feedback Alignment
Ruben Ohana, Hamlet J. Medina Ruiz, Julien Launay, Alessandro Cappelli, Iacopo Poli, Liva Ralaivola, Alain Rakotomamonjy
💥Differential privacy with @LightOnIO OPUs? Yes, you can!
— LightOn (@LightOnIO) June 8, 2021
In collaboration with @CriteoAILab here is "Photonic Differential Privacy with Direct Feedback Alignment" by @oharub Hamlet Ruiz @slippylolo @achapeau1 @iacopo_poli @LivaRalaivola @rakotal1 https://t.co/BV1GGyh7jG pic.twitter.com/t8OEj6LKxX
17. Meta-research on COVID-19: An overview of the early trends
Giovanni Colavizza
New pre-print out: “Meta-research on COVID-19: An overview of the early trends” https://t.co/67H2493My2
— Giovanni Colavizza (@giovanni1085) June 8, 2021
I review science studies, scientometrics and related meta-research work on COVID-19’s impact on research and researchers, and their responses.
Main findings follow 👇
18. Uformer: A General U-Shaped Transformer for Image Restoration
Zhendong Wang, Xiaodong Cun, Jianmin Bao, Jianzhuang Liu
Uformer: A General U-Shaped Transformer for Image Restoration
— AK (@ak92501) June 8, 2021
pdf: https://t.co/VHxgIBI0yr
abs: https://t.co/xqHomTEIzQ
achieves sota performance on several tasks, including denoising, deraining, deblurring, and demoireing pic.twitter.com/1JC2rSIMsx
19. Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation
Evgenii Nikishin, Romina Abachi, Rishabh Agarwal, Pierre-Luc Bacon
We present Optimal Model Design (OMD) — a model-based RL algorithm that trains the model to **directly** optimize the sum of rewards instead of proxies to the agent’s goal (e.g. likelihood p(s’, r | s, a)). https://t.co/RfMw9n6FiC
— Evgenii Nikishin (@nikishin_evg) June 8, 2021
With @rom72aba, @agarwl_, @pierrelux
1/9 🧵
Control-Oriented Model-Based Reinforcement Learning with Implicit Differentiation
— AK (@ak92501) June 8, 2021
pdf: https://t.co/bgwUFDdsyN
abs: https://t.co/8XAH6z0I9k
a method for learning control-oriented models that addresses the shortcomings of likelihood-based MBRL approaches pic.twitter.com/awLkPwHUic
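In general terms, the "implicit differentiation" here means differentiating the outer return objective through an inner optimum. A generic implicit-function-theorem template (our notation, not necessarily the paper's formulation): if the inner solution w*(θ), e.g. value-function parameters optimal under the learned model, satisfies a stationarity condition f(w*(θ), θ) = 0, then

∇_θ J(w*(θ)) = −(∂J/∂w) (∂f/∂w)⁻¹ (∂f/∂θ), with all derivatives evaluated at w = w*(θ),

so the model parameters θ receive gradients from the true objective J (the sum of rewards) rather than from a likelihood proxy.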
20. Refiner: Refining Self-attention for Vision Transformers
Daquan Zhou, Yujun Shi, Bingyi Kang, Weihao Yu, Zihang Jiang, Yuan Li, Xiaojie Jin, Qibin Hou, Jiashi Feng
Refiner: Refining Self-attention for Vision Transformers
— AK (@ak92501) June 8, 2021
pdf: https://t.co/IgvB5eeSTQ
abs: https://t.co/olYwqKm0Sp
augments the self-attention of ViTs by attention expansion and distributed local attention pic.twitter.com/D4EbMsDutd
21. Itihasa: A large-scale corpus for Sanskrit to English translation
Rahul Aralikatte, Miryam de Lhoneux, Anoop Kunchukuttan, Anders Søgaard
Announcing Itihasa, a large-scale Sanskrit-English translation corpus. This work is very close to my heart and I have been working on it for more than a year now. https://t.co/QHvpuSJreT (1/)
— Rahul (@rahul_a_r) June 8, 2021
22. Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation
Dongchan Min, Dong Bok Lee, Eunho Yang, Sung Ju Hwang
Meta-StyleSpeech : Multi-Speaker Adaptive Text-to-Speech Generation
— AK (@ak92501) June 8, 2021
pdf: https://t.co/dyYv0OzXSG
abs: https://t.co/mqKD7pyifb
project page: https://t.co/sBMKfftK2L pic.twitter.com/tj9hBf7Rh7
23. SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition
Rishabh Kabra, Daniel Zoran, Goker Erdogan, Loic Matthey, Antonia Creswell, Matthew Botvinick, Alexander Lerchner, Christopher P. Burgess
SIMONe: View-Invariant, Temporally-Abstracted Object Representations via Unsupervised Video Decomposition
— AK (@ak92501) June 8, 2021
pdf: https://t.co/Tnjm7UMynw
abs: https://t.co/960ki3Z7a4
project page: https://t.co/A5TxvNM8gM pic.twitter.com/KuwamRedm9
24. RedditBias: A Real-World Resource for Bias Evaluation and Debiasing of Conversational Language Models
Soumya Barikeri, Anne Lauscher, Ivan Vulić, Goran Glavaš
The RedditBias paper is available now: we describe a conversational data set grounded in actual human conversations from Reddit and couple bias evaluation with model capability evaluation in dialog tasks after model debiasing.🤖 @gg42554 @licwu @dwsunima https://t.co/31a0Z5W8Yy
— Anne Lauscher (@anne_lauscher) June 8, 2021
25. Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition
Jiaming Liu, M. Salman Asif, Brendt Wohlberg, Ulugbek S. Kamilov
"Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition" is out on arXiv.
— Ulugbek S. Kamilov (@ukmlv) June 8, 2021
Read it here: https://t.co/NiP0jt49HV.
We don't propose any new algorithms, so what is the goal? pic.twitter.com/SqHbNCFDxB
26. Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding
Yang Li, Si Si, Gang Li, Cho-Jui Hsieh, Samy Bengio
Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding
— AK (@ak92501) June 8, 2021
pdf: https://t.co/H3RKW8P6gE
abs: https://t.co/auUmvMqqMd
a novel positional encoding method based on learnable Fourier features pic.twitter.com/eE5d5lRlja
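The idea translates almost directly into a module. Below is a sketch (layer sizes and names are illustrative, not the official implementation): positions are projected by a trainable matrix, passed through sin/cos, then through a small MLP, and the result is added to the token embeddings.

import math
import torch
import torch.nn as nn

class LearnableFourierPE(nn.Module):
    def __init__(self, pos_dim=2, fourier_dim=64, hidden=128, out_dim=256):
        super().__init__()
        self.proj = nn.Linear(pos_dim, fourier_dim // 2, bias=False)   # trainable frequencies
        self.mlp = nn.Sequential(nn.Linear(fourier_dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, out_dim))
        self.scale = 1.0 / math.sqrt(fourier_dim)

    def forward(self, pos):                     # pos: (..., pos_dim), e.g. (B, N, 2) patch coords
        z = self.proj(pos)
        feats = self.scale * torch.cat([torch.cos(z), torch.sin(z)], dim=-1)
        return self.mlp(feats)                  # (..., out_dim), added to the token embeddings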
27. Lawvere-Tierney topologies for computability theorists
Takayuki Kihara
28. Hierarchical Video Generation for Complex Data
Lluis Castrejon, Nicolas Ballas, Aaron Courville
Hierarchical Video Generation for Complex Data
— AK (@ak92501) June 8, 2021
pdf: https://t.co/zKMauHlSMO
abs: https://t.co/DE4xHaOqgg
model generates a low resolution video, establishing the global scene structure, that is then refined by subsequent levels in the hierarchy pic.twitter.com/KiyY06ioHa