1. A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups
Marc Finzi, Max Welling, Andrew Gordon Wilson
Symmetry and equivariance are key ingredients for generalization, and underlie the massively successful CNNs, GCNNs, deep sets and graph networks.
I'm very excited to present our new work EMLP (https://t.co/S3yTgI8u4x) with @wellingmax and @andrewgwils
1/8 pic.twitter.com/DUelwdUDyZ
— Marc Finzi (@m_finzi) April 20, 2021
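The core idea of EMLP is that the equivariant linear layers of a network can be computed numerically from the group itself: a map W is equivariant when ρ_out(g) W ρ_in(g)⁻¹ = W for every generator g, which is a linear constraint on the entries of W. A minimal sketch of that constraint-solving step, on an assumed toy example (S₂ acting on R² by swapping coordinates, not the paper's code):

```python
import numpy as np

# Toy instance of solving rho_out(g) W rho_in(g)^{-1} = W: stack the linearized
# constraints over the group generators and take the nullspace via SVD.
P = np.array([[0.0, 1.0], [1.0, 0.0]])  # generator of S_2: swap the two coordinates
n_in = n_out = 2
constraints = []
for g_out, g_in in [(P, P)]:  # same representation on input and output
    # With row-major vec: vec(A W B) = (A ⊗ B^T) vec(W), so the equivariance
    # condition becomes (g_out ⊗ g_in^{-T} - I) vec(W) = 0.
    g_in_inv = np.linalg.inv(g_in)
    constraints.append(np.kron(g_out, g_in_inv.T) - np.eye(n_out * n_in))
A = np.vstack(constraints)
_, s, Vt = np.linalg.svd(A)
basis = Vt[np.abs(s) < 1e-8]  # nullspace rows = basis of equivariant linear maps
```

For this group the basis is 2-dimensional (all matrices of the form aI + b·swap), matching the classical commutant computation; the paper generalizes this recipe to arbitrary matrix groups, including continuous ones via Lie-algebra constraints.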
2. Cetacean Translation Initiative: a roadmap to deciphering the communication of sperm whales
Jacob Andreas, Gašper Beguš, Michael M. Bronstein, Roee Diamant, Denley Delaney, Shane Gero, Shafi Goldwasser, David F. Gruber, Sarah de Haas, Peter Malkin, Roger Payne, Giovanni Petri, Daniela Rus, Pratyusha Sharma, Dan Tchernov, Pernille Tønnesen, Antonio Torralba, Daniel Vogt, Robert J. Wood
- retweets: 5694, favorites: 117 (04/21/2021 12:37:36)
- links: abs | pdf
- cs.SD | cs.AI | cs.CL | cs.LG | cs.RO | eess.AS
Using AI to decipher the clicks of sperm whales: https://t.co/1ePbmtvtOI
— MIT CSAIL (@MIT_CSAIL) April 20, 2021
Paper: https://t.co/TA5cgmnBHi @projectceti w/ @Harvard @MIT @CUNY (v/ @NatGeo) pic.twitter.com/GRW1ME18Wa
Roboticists, biologists, linguists, and AI experts attempt to decode sperm whale communication. Very excited to be part of this team working on machine learning and linguistics.
A roadmap: https://t.co/gH1YhG3UHD
How does one approach a communication system of a species so … pic.twitter.com/WsuQSxdpfp
— Gasper Begus (@begusgasper) April 20, 2021
3. Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
Aditya Prakash, Kashyap Chitta, Andreas Geiger
Multi-Modal Fusion Transformer for End-to-End Autonomous Driving
— AK (@ak92501) April 20, 2021
pdf: https://t.co/PtLIM5WNtD
abs: https://t.co/rXLKnXJxfX
github: https://t.co/c3Kci7meiE pic.twitter.com/3PGuYOuKZH
4. The Power of Scale for Parameter-Efficient Prompt Tuning
Brian Lester, Rami Al-Rfou, Noah Constant
Fine-tuning is dead. Prompts have closed the gap.
— Ethan Caballero (@ethancaballero) April 20, 2021
"The Power of Scale for Parameter-Efficient Prompt Tuning" https://t.co/g5kxMjXs9j pic.twitter.com/pC5OiMuKIG
The Power of Scale for Parameter-Efficient Prompt Tuning
— AK (@ak92501) April 20, 2021
pdf: https://t.co/CLFjJjxyoV
abs: https://t.co/ppMz6DpRgT
"Our end-to-end learned approach outperforms GPT-3’s “few-shot” learning by a large margin" pic.twitter.com/DDVuRcc2p9
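The mechanism behind prompt tuning is simple: the language model is frozen, and the only trainable parameters are a handful of "soft prompt" vectors prepended to the embedded input. A minimal sketch of that input construction, with all shapes and names being illustrative assumptions rather than the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, prompt_len, seq_len = 16, 5, 8

# Stand-in for the frozen LM's embedding of an input sequence.
token_embeds = rng.normal(size=(seq_len, d_model))

# The ONLY trainable parameters in prompt tuning: prompt_len soft-prompt vectors.
soft_prompt = rng.normal(size=(prompt_len, d_model)) * 0.01

# Prepend the soft prompt; the frozen model consumes the concatenated sequence,
# and gradients flow back only into soft_prompt.
model_input = np.concatenate([soft_prompt, token_embeds], axis=0)
```

The paper's headline result is that as the frozen model scales up, this tiny trainable prefix closes the quality gap with full fine-tuning.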
5. Metadata Normalization
Mandy Lu, Qingyu Zhao, Jiequan Zhang, Kilian M. Pohl, Li Fei-Fei, Juan Carlos Niebles, Ehsan Adeli
Check our paper @CVPR 2021: Metadata Normalization (MDN), a new batch-level operation (end2end training) to correct the influence of metadata (#bias, #confounder, you name it) on feature distributions. W/ @drfeifei @jcniebles et al. https://t.co/YT44EhGOl7 https://t.co/Pee8UHuKxV pic.twitter.com/hkNbXspA3m
— Ehsan Adeli (@eadeli) April 20, 2021
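The batch-level operation the tweet describes can be understood as residualizing features against metadata: regress each feature dimension on the metadata variables within a batch and keep only the residual. A hedged numpy sketch of that idea (illustrative, not the authors' implementation, which learns the regression end-to-end):

```python
import numpy as np

def metadata_normalize(features, metadata):
    """Remove the component of each feature explainable by metadata.

    Batch-level GLM residualization: least-squares regress features on
    [1, metadata] and return the residual, which is (in-batch) uncorrelated
    with the metadata variables.
    """
    X = np.column_stack([np.ones(len(metadata)), np.asarray(metadata)])
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    return features - X @ beta
```

By construction the residuals are orthogonal to the metadata columns, which is the sense in which the confounder's influence on the feature distribution is "corrected".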
6. PARE: Part Attention Regressor for 3D Human Body Estimation
Muhammed Kocabas, Chun-Hao P. Huang, Otmar Hilliges, Michael J. Black
PARE: Part Attention Regressor for 3D Human Body Estimation
— AK (@ak92501) April 20, 2021
pdf: https://t.co/VMgM8LOOng
abs: https://t.co/7HlKBrsZZV
project page: https://t.co/lyvHVHyhvY pic.twitter.com/XsNif04iBQ
7. Agent-Centric Representations for Multi-Agent Reinforcement Learning
Wenling Shang, Lasse Espeholt, Anton Raichuk, Tim Salimans
Agent-Centric Representations for Multi-Agent Reinforcement Learning
— AK (@ak92501) April 20, 2021
pdf: https://t.co/e6ZtkmFxNR
abs: https://t.co/gOMyd8kgRB
project page: https://t.co/LDgBW5cfX0 pic.twitter.com/yuZe8Vlt7P
8. The Simpson’s Paradox in the Offline Evaluation of Recommendation Systems
Amir H. Jadidinejad, Craig Macdonald, Iadh Ounis
The preprint of our ACM TOIS journal paper entitled "The Simpson's Paradox in the Offline Evaluation of Recommendation Systems" is now available at: https://t.co/EXQGIgtd41 - joint work with @jadidinejad and @craig_macdonald #recsys
— Iadh Ounis (@iadh) April 20, 2021
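For readers unfamiliar with Simpson's paradox, a tiny worked example shows how per-stratum and pooled comparisons can disagree — the hazard the paper identifies in offline recommender evaluation. The numbers below are the classic kidney-stone data (Charig et al.), used purely as an illustration:

```python
# Success / total, per stone-size stratum.
treatment_a = {"small": (81, 87), "large": (192, 263)}
treatment_b = {"small": (234, 270), "large": (55, 80)}

def rate(succ_total):
    succ, total = succ_total
    return succ / total

# Treatment A wins within EVERY stratum...
assert rate(treatment_a["small"]) > rate(treatment_b["small"])  # 0.93 > 0.87
assert rate(treatment_a["large"]) > rate(treatment_b["large"])  # 0.73 > 0.69

# ...yet loses on the pooled data, because the strata are unevenly sized.
overall_a = (81 + 192) / (87 + 263)   # 0.78
overall_b = (234 + 55) / (270 + 80)   # ~0.83
assert overall_a < overall_b
```

In the recommendation setting, the "strata" are induced by the deployed system's logging policy, so aggregate offline metrics can invert the true per-stratum ordering of models.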
9. BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych
🚨New paper alert 🚨
— Nandan Thakur (@Nthakur20) April 20, 2021
🍻 BEIR: a heterogeneous benchmark for IR. 17 datasets, 9 tasks with diverse domains. 9 SOTA retrieval models evaluated in a zero-shot setup.
w/ @Nils_Reimers @arueckle @abhesrivas, IG at @UKPLab
pdf: https://t.co/czg9S9owWm
More details, code 👇#NLProc pic.twitter.com/2vIbGhN6qB
10. StylePeople: A Generative Model of Fullbody Human Avatars
Artur Grigorev, Karim Iskakov, Anastasia Ianina, Renat Bashirov, Ilya Zakharkin, Alexander Vakhitov, Victor Lempitsky
StylePeople: A Generative Model of Fullbody Human Avatars
— AK (@ak92501) April 20, 2021
pdf: https://t.co/aXmFkp2KEe
abs: https://t.co/rJhz0DSUH4 pic.twitter.com/dULT9hVELe
11. Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
Ruizhe Cheng, Bichen Wu, Peizhao Zhang, Peter Vajda, Joseph E. Gonzalez
Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
— AK (@ak92501) April 20, 2021
pdf: https://t.co/CSilCeLyAE
abs: https://t.co/TUADQLD9V1
the model achieves strong performance with only 3M image-text pairs, 133x smaller than CLIP pic.twitter.com/8Oq6qlR8nD
12. Using Machine Learning at Scale in HPC Simulations with SmartSim: An Application to Ocean Climate Modeling
Sam Partee, Matthew Ellis, Alessandro Rigazzi, Scott Bachman, Gustavo Marques, Andrew Shao, Benjamin Robbins
- retweets: 72, favorites: 16 (04/21/2021 12:37:42)
- links: abs | pdf
- cs.CE | cs.DC | cs.LG | physics.ao-ph
13. Simple Type Theory is not too Simple: Grothendieck’s Schemes without Dependent Types
Anthony Bordg, Lawrence Paulson, Wenda Li
A wonderful quote from https://t.co/SUJhJeC41q: "In formal mathematics, adding an axiom later is easier than removing one!". So few people seem to deeply understand & appreciate this. It's the reason I so rabidly stick to the 'tiny theories' method of building libraries of math.
— Jacques Carette (@jjcarett2) April 20, 2021
14. Temporal Query Networks for Fine-grained Video Understanding
Chuhan Zhang, Ankush Gupta, Andrew Zisserman
Temporal Query Networks for Fine-grained Video Understanding
— AK (@ak92501) April 20, 2021
pdf: https://t.co/3zQc3d6x5J
abs: https://t.co/UiLyRaFVFP
project page: https://t.co/wF6Err20ju pic.twitter.com/7SU7CpVlul
15. FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling
Christopher Xie, Keunhong Park, Ricardo Martin-Brualla, Matthew Brown
FiG-NeRF: Figure-Ground Neural Radiance Fields for 3D Object Category Modelling
— AK (@ak92501) April 20, 2021
pdf: https://t.co/oPLjUBskuh
abs: https://t.co/PWo8gSs4Xl pic.twitter.com/8bnwdGqUq0
16. GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyeong Park
GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation
— AK (@ak92501) April 20, 2021
pdf: https://t.co/mFm2P3EFac
abs: https://t.co/PXp3goC7ni pic.twitter.com/bM9GKfVbEt
17. Self-supervised Representation Learning With Path Integral Clustering For Speaker Diarization
Prachi Singh, Sriram Ganapathy
Our work on self-supervised learning for speaker diarization is accepted for IEEE Tran. on Audio Speech and Lang. Proc.
— Sriram Ganapathy (@tweet4sri) April 20, 2021
Self-supervised learning is a branch of unsupervised learning where the data provides supervision labels. https://t.co/NiI7AndDwD #MachineLearning #research pic.twitter.com/P3znKyQHfE
18. CLIPScore: A Reference-free Evaluation Metric for Image Captioning
Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, Yejin Choi
CLIPScore: A Reference-free Evaluation Metric for Image Captioning
— AK (@ak92501) April 20, 2021
CLIP can be used for robust automatic evaluation of image captioning without the need for references
pdf: https://t.co/dK9IcPHLf8
abs: https://t.co/zlD3NrlcU1 pic.twitter.com/NdkmIDfzQC
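The metric itself is simple: a rescaled, floored cosine similarity between the CLIP embeddings of the image and the candidate caption, with no reference captions involved. A minimal sketch assuming the embeddings have already been computed by some CLIP encoder (the rescaling constant 2.5 follows the paper):

```python
import numpy as np

def clipscore(image_emb, caption_emb, w=2.5):
    """CLIPScore = w * max(cos(image, caption), 0).

    Reference-free: only the image and the candidate caption are embedded;
    no ground-truth captions are needed.
    """
    c = np.asarray(image_emb, dtype=float)
    v = np.asarray(caption_emb, dtype=float)
    cos = float(c @ v) / (np.linalg.norm(c) * np.linalg.norm(v))
    return w * max(cos, 0.0)
```

The paper also proposes a reference-augmented variant (RefCLIPScore) for settings where references do exist.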
19. On the Robustness to Misspecification of α-Posteriors and Their Variational Approximations
Marco Avella Medina, José Luis Montiel Olea, Cynthia Rush, Amilcar Velez
A new preprint on variational approximations of α-posteriors by M. Avella Medina, J. L. Montiel Olea, C. Rush & A. Velez.
— Pierre Alquier (@PierreAlquier) April 20, 2021
They prove Bernstein-von Mises theorems. The way the asymptotic variance depends on α is very important. See Theorem 3 🤩 https://t.co/W9HXuNngwY
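For context, the object under study is the α-posterior (also called a fractional or tempered posterior), which raises the likelihood to a power α before combining it with the prior:

```latex
\pi_{n,\alpha}(\theta) \;\propto\; \exp\{\alpha\,\ell_n(\theta)\}\,\pi(\theta),
\qquad \alpha \in (0, 1],
```

where $\ell_n$ is the log-likelihood of $n$ observations. Setting $\alpha = 1$ recovers the standard posterior, while $\alpha < 1$ tempers the likelihood, which is known to improve robustness under model misspecification; the paper's Bernstein-von Mises results make precise how the limiting variance of the α-posterior (and its variational approximations) depends on α.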