1. GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds
Zekun Hao, Arun Mallya, Serge Belongie, Ming-Yu Liu
We present GANcraft, an unsupervised neural rendering framework for generating photorealistic images of large 3D block worlds such as those created in Minecraft. Our method takes a semantic block world as input, where each block is assigned a semantic label such as dirt, grass, or water. We represent the world as a continuous volumetric function and train our model to render view-consistent photorealistic images for a user-controlled camera. In the absence of paired ground truth real images for the block world, we devise a training technique based on pseudo-ground truth and adversarial training. This stands in contrast to prior work on neural rendering for view synthesis, which requires ground truth images to estimate scene geometry and view-dependent appearance. In addition to camera trajectory, GANcraft allows user control over both scene semantics and output style. Experimental results with comparison to strong baselines show the effectiveness of GANcraft on this novel task of photorealistic 3D block world synthesis. The project website is available at https://nvlabs.github.io/GANcraft/ .
Introducing GANcraft, a method to convert user-created semantic 3D block worlds, like those from Minecraft, to realistic-looking worlds, without paired training data!
— Arun Mallya (@arunmallya) April 16, 2021
arxiv: https://t.co/TR9rpqeEI2
webpage: https://t.co/u2xrJdKEME
by @zekunhao19951, @SergeBelongie, @liu_mingyu pic.twitter.com/I0V0NP5PsI
2. Neural population geometry: An approach for understanding biological and artificial neural networks
SueYeon Chung, L. F. Abbott
Advances in experimental neuroscience have transformed our ability to explore the structure and function of neural circuits. At the same time, advances in machine learning have unleashed the remarkable computational power of artificial neural networks (ANNs). While these two fields have different tools and applications, they present a similar challenge: namely, understanding how information is embedded and processed through high-dimensional representations to solve complex tasks. One approach to addressing this challenge is to utilize mathematical and computational tools to analyze the geometry of these high-dimensional representations, i.e., neural population geometry. We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks: representation untangling in perception, a geometric theory of classification capacity, disentanglement and abstraction in cognitive systems, topological representations underlying cognitive maps, dynamic untangling in motor systems, and a dynamical approach to cognition. Together, these findings illustrate an exciting trend at the intersection of machine learning, neuroscience, and geometry, in which neural population geometry provides a useful population-level mechanistic descriptor underlying task implementation. Importantly, geometric descriptions are applicable across sensory modalities, brain regions, network architectures and timescales. Thus, neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks, bridging the gap between single neurons, populations and behavior.
Speaking of opinionated reviews...
— SueYeon Chung (@s_y_chung) April 16, 2021
a new review with LF Abbott on #NeuralManifolds:
"Neural population geometry: An approach for understanding biological and artificial neural networks" https://t.co/Rf2GQX2YBf
3. Deep Learning-based Online Alternative Product Recommendations at Scale
Mingming Guo, Nian Yan, Xiquan Cui, San He Wu, Unaiza Ahsan, Rebecca West, Khalifeh Al Jadda
Alternative recommender systems are critical for ecommerce companies. They guide customers to explore a massive product catalog and help customers find the right products among an overwhelming number of options. However, it is a non-trivial task to recommend alternative products that fit customer needs. In this paper, we use both textual product information (e.g., product titles and descriptions) and customer behavior data to recommend alternative products. Our results show that the coverage of alternative products, as well as recall and precision, is significantly improved in offline evaluations. The final A/B test shows that our algorithm increases the conversion rate by 12 percent in a statistically significant way. In order to better capture the semantic meaning of product information, we build a Siamese Network with Bidirectional LSTM to learn product embeddings. In order to learn a similarity space that better matches the preference of real customers, we use co-compared data from historical customer behavior as labels to train the network. In addition, we use NMSLIB to accelerate the computationally expensive kNN computation for millions of products so that the alternative recommendation is able to scale across the entire catalog of a major ecommerce site.
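A minimal sketch (not the authors' code) of the two ingredients the abstract names: a Siamese BiLSTM that embeds product text, and an NMSLIB HNSW index for fast approximate kNN. Hidden sizes, pooling, the loss choice, and the toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn
import nmslib

class ProductEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        h, _ = self.bilstm(self.embed(token_ids))      # (B, T, 2*hidden)
        emb = h.mean(dim=1)                            # simple mean pooling
        return nn.functional.normalize(emb, dim=-1)    # unit-norm product embedding

encoder = ProductEncoder(vocab_size=50_000)

# Siamese training step: co-compared products (label=1) should be close, others far.
anchor = torch.randint(1, 50_000, (32, 40))
other = torch.randint(1, 50_000, (32, 40))
label = torch.randint(0, 2, (32,)).float()
sim = (encoder(anchor) * encoder(other)).sum(-1)       # cosine similarity of each pair
loss = nn.functional.binary_cross_entropy_with_logits(sim, label)
loss.backward()

# Offline: index all product embeddings with NMSLIB (HNSW) for approximate kNN.
index = nmslib.init(method="hnsw", space="cosinesimil")
index.addDataPointBatch(encoder(anchor).detach().numpy())
index.createIndex({"post": 2})
ids, dists = index.knnQuery(encoder(other[:1]).detach().numpy()[0], k=5)
```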
We've reached the stage of AI ubiquity where I'm just like "cool, makes sense" when seeing a deep learning paper published by researchers at Home Depot: https://t.co/zBfNw0HqjB
— Miles Brundage (@Miles_Brundage) April 16, 2021
4. Geometry-Free View Synthesis: Transformers and no 3D Priors
Robin Rombach, Patrick Esser, Björn Ommer
Is a geometric model required to synthesize novel views from a single image? Being bound to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In contrast, we demonstrate that a transformer-based model can synthesize entirely novel views without any hand-engineered 3D biases. This is achieved by (i) a global attention mechanism for implicitly learning long-range 3D correspondences between source and target views, and (ii) a probabilistic formulation necessary to capture the ambiguity inherent in predicting novel views from a single image, thereby overcoming the limitations of previous approaches that are restricted to relatively small viewpoint changes. We evaluate various ways to integrate 3D priors into a transformer architecture. However, our experiments show that no such geometric priors are required and that the transformer is capable of implicitly learning 3D relationships between images. Furthermore, this approach outperforms the state of the art in terms of visual quality while covering the full distribution of possible realizations. Code is available at https://git.io/JOnwn
Geometry-Free View Synthesis: Transformers and no 3D Priors
— AK (@ak92501) April 16, 2021
pdf: https://t.co/LVJhesZ2NL
abs: https://t.co/mGluIj1Hnt pic.twitter.com/kSS0qHHIt3
Geometry-Free View Synthesis: We don't need no 3D priors. Leave them transformers unbiased!
— Patrick Esser (@pess_r) April 16, 2021
Without coding 3D transformations into the model, they learn to synthesize novel views from a single input image. https://t.co/3LhqNCRPjf pic.twitter.com/5LV6GCSIJS
5. Retrieval Augmentation Reduces Hallucination in Conversation
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, Jason Weston
Despite showing increasingly human-like conversational abilities, state-of-the-art dialogue models often suffer from factual incorrectness and hallucination of knowledge (Roller et al., 2020). In this work we explore the use of neural-retrieval-in-the-loop architectures - recently shown to be effective in open-domain QA (Lewis et al., 2020b; Izacard and Grave, 2020) - for knowledge-grounded dialogue, a task that is arguably more challenging as it requires querying based on complex multi-turn dialogue context and generating conversationally coherent responses. We study various types of architectures with multiple components - retrievers, rankers, and encoder-decoders - with the goal of maximizing knowledgeability while retaining conversational ability. We demonstrate that our best models obtain state-of-the-art performance on two knowledge-grounded conversational tasks. The models exhibit open-domain conversational capabilities, generalize effectively to scenarios not within the training data, and, as verified by human evaluations, substantially reduce the well-known problem of knowledge hallucination in state-of-the-art chatbots.
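To make the retrieval-in-the-loop idea concrete, here is a minimal sketch (not the paper's FiD/RAG models): retrieve a knowledge passage for the multi-turn dialogue context, then condition the response generator on context plus retrieved knowledge. The TF-IDF retriever and the `generate()` stub are illustrative stand-ins for the neural retrievers, rankers, and encoder-decoders studied in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge = [
    "The Eiffel Tower is 330 metres tall and was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
dialogue = ["Hi! I love Paris.", "Me too, the landmarks are amazing.", "How tall is the Eiffel Tower?"]

vec = TfidfVectorizer().fit(knowledge)
query = " ".join(dialogue)                      # query built from the whole dialogue context
scores = cosine_similarity(vec.transform([query]), vec.transform(knowledge))[0]
top_passage = knowledge[scores.argmax()]

def generate(context: str, passage: str) -> str:
    # Stand-in for an encoder-decoder conditioned on dialogue context + retrieved knowledge.
    return f"(response grounded on: {passage})"

print(generate(query, top_passage))
```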
(1/2) 🚨 Our new work! 🚨 "Retrieval Augmentation Reduces Hallucination in Conversation" @shtruk @spencerpoff @moyapchen @douwekiela @jaseweston https://t.co/r0W6xRuffk
— Jason Weston (@jaseweston) April 16, 2021
We infuse dialogue models with knowledge, significantly reducing hallucinated facts during conversation. pic.twitter.com/1rY1GvYTGg
6. Generating Datasets with Pretrained Language Models
Timo Schick, Hinrich Schütze
To obtain high-quality sentence embeddings from pretrained language models, they must either be augmented with additional pretraining objectives or finetuned on large amounts of labeled text pairs. While the latter approach typically outperforms the former, it requires great human effort to generate suitable datasets of sufficient size. In this paper, we show how large pretrained language models can be leveraged to obtain high-quality embeddings without requiring any labeled data, finetuning or modifications to their pretraining objective: We utilize their generative abilities to generate entire datasets of labeled text pairs from scratch, which can then be used for regular finetuning of much smaller models. Our fully unsupervised approach outperforms strong baselines on several English semantic textual similarity datasets.
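A minimal sketch of instruction-based dataset generation in the spirit of DINO (not the authors' implementation): prompt a generative LM to produce labeled sentence pairs from scratch. The prompts, labels, and the use of GPT-2 are assumptions for illustration.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

instructions = {
    "similar":    'Task: Write two sentences that mean the same thing.\nSentence 1: "',
    "dissimilar": 'Task: Write two sentences that mean completely different things.\nSentence 1: "',
}

dataset = []
for label, prompt in instructions.items():
    outputs = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=2)
    for out in outputs:
        # Keep only the newly generated continuation as the labeled example.
        dataset.append({"label": label, "text": out["generated_text"][len(prompt):]})

# The generated pairs can then be filtered and used to fine-tune a much smaller
# sentence-embedding model, as described in the abstract.
print(dataset[0])
```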
🎉 New paper 🎉 In "Generating Datasets with Pretrained Language Models", we introduce DINO🦕 and show how LMs can create entire datasets from scratch if provided with instructions. These datasets can be used to train much smaller models #NLProc
— Timo Schick (@timo_schick) April 16, 2021
📄 Paper: https://t.co/l9InmzRQiD pic.twitter.com/IdNA7aD7zv
Generating Datasets with Pretrained Language Models
— AK (@ak92501) April 16, 2021
pdf: https://t.co/eL8EWzAJkC
abs: https://t.co/nvvUbbMz3c
We utilize their generative abilities to generate entire datasets of labeled text pairs from scratch, which can then be used for regular finetuning of much smaller models pic.twitter.com/Y00ddcL9gf
7. XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation
Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Graham Neubig, Melvin Johnson
Machine learning has brought striking advances in multilingual natural language processing capabilities over the past year. For example, the latest techniques have improved the state-of-the-art performance on the XTREME multilingual benchmark by more than 13 points. While a sizeable gap to human-level performance remains, improvements have been easier to achieve in some tasks than in others. This paper analyzes the current state of cross-lingual transfer learning and summarizes some lessons learned. In order to catalyze meaningful progress, we extend XTREME to XTREME-R, which consists of an improved set of ten natural language understanding tasks, including challenging language-agnostic retrieval tasks, and covers 50 typologically diverse languages. In addition, we provide a massively multilingual diagnostic suite and fine-grained multi-dataset evaluation capabilities through an interactive public leaderboard to gain a better understanding of such models.
XTREME-R: Towards More Challenging and Nuanced Multilingual Evaluation
— Sebastian Ruder (@seb_ruder) April 16, 2021
We examine the state of multilingual benchmarking and propose an improved benchmark covering more challenging tasks, including a diagnostic and evaluation suite to inform future work. https://t.co/QCppOeNrV4 pic.twitter.com/pR8FRauZH6
8. Self-supervised Video Object Segmentation by Motion Grouping
Charig Yang, Hala Lamdouar, Erika Lu, Andrew Zisserman, Weidi Xie
Animals have evolved highly functional visual systems to understand motion, assisting perception even under complex environments. In this paper, we work towards developing a computer vision system able to segment objects by exploiting motion cues, i.e. motion segmentation. We make the following contributions: First, we introduce a simple variant of the Transformer to segment optical flow frames into primary objects and the background. Second, we train the architecture in a self-supervised manner, i.e. without using any manual annotations. Third, we analyze several critical components of our method and conduct thorough ablation studies to validate their necessity. Fourth, we evaluate the proposed architecture on public benchmarks (DAVIS2016, SegTrackv2, and FBMS59). Despite using only optical flow as input, our approach achieves superior or comparable results to previous state-of-the-art self-supervised methods, while being an order of magnitude faster. We additionally evaluate on a challenging camouflage dataset (MoCA), significantly outperforming the other self-supervised approaches, and comparing favourably to the top supervised approach, highlighting the importance of motion cues, and the potential bias towards visual appearance in existing video segmentation models.
Self-supervised Video Object Segmentation by Motion Grouping
— AK (@ak92501) April 16, 2021
pdf: https://t.co/vYqdN0vMYA
abs: https://t.co/R1LPdDRINv
project page: https://t.co/eCgbbET1Lk pic.twitter.com/OUds38OOdT
9. Cross-Domain Label-Adaptive Stance Detection
Momchil Hardalov, Arnav Arora, Preslav Nakov, Isabelle Augenstein
Stance detection concerns the classification of a writer’s viewpoint towards a target. There are different task variants, e.g., stance of a tweet vs. a full article, or stance with respect to a claim vs. an (implicit) topic. Moreover, task definitions vary, which includes the label inventory, the data collection, and the annotation protocol. All these aspects hinder cross-domain studies, as they require changes to standard domain adaptation approaches. In this paper, we perform an in-depth analysis of 16 stance detection datasets, and we explore the possibility for cross-domain learning from them. Moreover, we propose an end-to-end unsupervised framework for out-of-domain prediction of unseen, user-defined labels. In particular, we combine domain adaptation techniques such as mixture of experts and domain-adversarial training with label embeddings, and we demonstrate sizable performance gains over strong baselines — both (i) in-domain, i.e., for seen targets, and (ii) out-of-domain, i.e., for unseen targets. Finally, we perform an exhaustive analysis of the cross-domain results, and we highlight the important factors influencing the model performance.
New #NLProc paper preprint 📝, in which we present MoLE (Mixture-of-Experts w. Label Embeddings)
— Isabelle Augenstein (@IAugenstein) April 16, 2021
👷unsupervised out-of-domain prediction of user-defined labels
🔧parameter-efficient
🚀sizable performance gains in an eval on 16 stance detection datasets https://t.co/cOrBw81Zpq pic.twitter.com/5VMaQRp0NX
10. Auto-Tuned Sim-to-Real Transfer
Yuqing Du, Olivia Watkins, Trevor Darrell, Pieter Abbeel, Deepak Pathak
- retweets: 347, favorites: 108 (04/17/2021 10:37:41)
- links: abs | pdf
- cs.RO | cs.AI | cs.CV | cs.HC | cs.LG
Policies trained in simulation often fail when transferred to the real world due to the 'reality gap' where the simulator is unable to accurately capture the dynamics and visual properties of the real world. Current approaches to tackle this problem, such as domain randomization, require prior knowledge and engineering to determine how much to randomize system parameters in order to learn a policy that is robust to sim-to-real transfer while also not being too conservative. We propose a method for automatically tuning simulator system parameters to match the real world using only raw RGB images of the real world without the need to define rewards or estimate state. Our key insight is to reframe the auto-tuning of parameters as a search problem where we iteratively shift the simulation system parameters to approach the real-world system parameters. We propose a Search Param Model (SPM) that, given a sequence of observations and actions and a set of system parameters, predicts whether the given parameters are higher or lower than the true parameters used to generate the observations. We evaluate our method on multiple robotic control tasks in both sim-to-sim and sim-to-real transfer, demonstrating significant improvement over naive domain randomization. Project videos and code at https://yuqingd.github.io/autotuned-sim2real/
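A toy sketch of the search loop described above (not the released code): a Search Param Model predicts, from real-world observations, whether each current simulation parameter is higher or lower than the true one, and the parameters are stepped in that direction. The `spm()` stand-in, the parameter names, and the step schedule are illustrative assumptions.

```python
import numpy as np

def spm(real_trajectory, sim_params):
    """Stand-in for the learned Search Param Model: returns +1 where the true
    parameter is believed to be higher than sim_params, -1 where lower."""
    hidden_true = np.array([0.8, 0.3])           # unknown real-world parameters (for the demo only)
    return np.sign(hidden_true - sim_params)

sim_params = np.array([0.5, 0.5])                # initial guess, e.g. friction and mass
step = 0.1
real_trajectory = None                           # raw RGB observations in the real method
for it in range(20):
    direction = spm(real_trajectory, sim_params)
    sim_params = np.clip(sim_params + step * direction, 0.0, 1.0)
    step *= 0.8                                  # shrink steps as the search converges

print(sim_params)                                # approaches the real-world parameters
```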
Auto-Tuned Sim-to-Real Transfer
— AK (@ak92501) April 16, 2021
pdf: https://t.co/xlzkKXlrqk
abs: https://t.co/PpH7u8N2f6
project page: https://t.co/xJwwAo72ed
github: https://t.co/DXQs9MuPq6 pic.twitter.com/Ak0VMEQ31o
Auto-Tuned Sim-to-Real Transfer https://t.co/YZrhK5b1cy https://t.co/5IxDFYUnkR https://t.co/xaTaW2ND0V pic.twitter.com/KKGLNKQmJI
— sim2real (@sim2realAIorg) April 16, 2021
11. Image Super-Resolution via Iterative Refinement
Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J. Fleet, Mohammad Norouzi
We present SR3, an approach to image Super-Resolution via Repeated Refinement. SR3 adapts denoising diffusion probabilistic models to conditional image generation and performs super-resolution through a stochastic denoising process. Inference starts with pure Gaussian noise and iteratively refines the noisy output using a U-Net model trained on denoising at various noise levels. SR3 exhibits strong performance on super-resolution tasks at different magnification factors, on faces and natural images. We conduct human evaluation on a standard 8X face super-resolution task on CelebA-HQ, comparing with SOTA GAN methods. SR3 achieves a fool rate close to 50%, suggesting photo-realistic outputs, while GANs do not exceed a fool rate of 34%. We further show the effectiveness of SR3 in cascaded image generation, where generative models are chained with super-resolution models, yielding a competitive FID score of 11.3 on ImageNet.
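A schematic sketch of SR3-style inference as described in the abstract (not the authors' code): start from pure Gaussian noise and iteratively denoise with a U-Net conditioned on the low-resolution image, using a standard DDPM ancestral-sampling step. The noise schedule, resolutions, and the `denoise_unet` placeholder are assumptions.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

def denoise_unet(x_t, lowres, t):
    # Stand-in for the trained U-Net that predicts the noise added at step t.
    return torch.zeros_like(x_t)

lowres = torch.rand(1, 3, 128, 128)              # (upsampled) low-resolution input
x = torch.randn(1, 3, 128, 128)                  # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = denoise_unet(x, lowres, t)
    mean = (x - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
    noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    x = mean + torch.sqrt(betas[t]) * noise      # ancestral sampling step
# x is the super-resolved output after the final step.
```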
Image Super-Resolution via Iterative Refinement
— AK (@ak92501) April 16, 2021
pdf: https://t.co/gzc7bPfYqy
abs: https://t.co/GNRDXh2I6P pic.twitter.com/eViT6nX6L3
12. mlf-core: a framework for deterministic machine learning
Lukas Heumos, Philipp Ehmele, Kevin Menden, Luis Kuhn Cuellar, Edmund Miller, Steffen Lemke, Gisela Gabernet, Sven Nahnsen
- retweets: 353, favorites: 68 (04/17/2021 10:37:42)
- links: abs | pdf
- cs.MS | cs.LG | q-bio.QM | stat.ML
Machine learning has shown extensive growth in recent years. However, previously existing studies highlighted a reproducibility crisis in machine learning. The reasons for irreproducibility are manifold. Major machine learning libraries default to the usage of non-deterministic algorithms based on atomic operations. Solely fixing all random seeds is not sufficient for deterministic machine learning. To overcome this shortcoming, various machine learning libraries released deterministic counterparts to the non-deterministic algorithms. We evaluated the effect of these algorithms on determinism and runtime. Based on these results, we formulated a set of requirements for reproducible machine learning and developed a new software solution, the mlf-core ecosystem, which aids machine learning projects to meet and keep these requirements. We applied mlf-core to develop fully reproducible models in various biomedical fields including a single cell autoencoder with TensorFlow, a PyTorch-based U-Net model for liver-tumor segmentation in CT scans, and a liver cancer classifier based on gene expression profiles with XGBoost.
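As the abstract notes, fixing seeds alone is not sufficient. A minimal sketch of the deterministic-training settings such a workflow enforces for PyTorch projects (these are standard PyTorch APIs; mlf-core additionally templates and checks them in projects, which is not shown here):

```python
import os
import random
import numpy as np
import torch

def set_all_seeds(seed: int = 0) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_all_seeds(0)
# Seeds alone are not enough: also force deterministic (non-atomic) GPU kernels.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"   # required by some CUDA ops in deterministic mode
torch.use_deterministic_algorithms(True)            # error out on non-deterministic ops
torch.backends.cudnn.benchmark = False              # disable non-deterministic autotuning
```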
Super excited to be announcing my very first first author preprint: https://t.co/ooI8e02Ln5. We tackled the reproducibility problem with GPUs in machine learning caused by non-deterministic algorithms and developed the https://t.co/LqucNu9Fax framework.
— Lukas Heumos (@LukasHeumos) April 16, 2021
Wondering how you can achieve determinism for your deep learning ML models? Try out our mlf-core framework! A project led by @LukasHeumos, in collaboration with @Farewent_, @E_Miller88 and several colleagues @QBIC_tue. Preprint is now out https://t.co/R5ag3om6T5
— Gisela Gabernet (@GGabernet) April 16, 2021
13. Self-Supervised Exploration via Latent Bayesian Surprise
Pietro Mazzaglia, Ozan Catal, Tim Verbelen, Bart Dhoedt
Training with Reinforcement Learning requires a reward function that is used to guide the agent towards achieving its objective. However, designing smooth and well-behaved rewards is in general not trivial and requires significant human engineering efforts. Generating rewards in a self-supervised way, by inspiring the agent with an intrinsic desire to learn and explore the environment, might induce more general behaviours. In this work, we propose a curiosity-based bonus as intrinsic reward for Reinforcement Learning, computed as the Bayesian surprise with respect to a latent state variable, learnt by reconstructing fixed random features. We extensively evaluate our model by measuring the agent’s performance in terms of environment exploration, for continuous tasks, and looking at the game scores achieved, for video games. Our model is computationally cheap and empirically shows state-of-the-art performance on several problems. Furthermore, experimenting on an environment with stochastic actions, our approach proved to be the most resilient to simple stochasticity. Further visualizations are available on the project webpage (https://lbsexploration.github.io/).
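A toy sketch of this kind of curiosity bonus (not the authors' model): the intrinsic reward is the KL divergence between the posterior and prior beliefs over a latent state variable, shown here with diagonal Gaussians. The latent dimensionality and the belief parameters are illustrative; in the paper these come from a learned latent dynamics model.

```python
import torch
import torch.distributions as D

def intrinsic_reward(prior_mu, prior_std, post_mu, post_std):
    prior = D.Normal(prior_mu, prior_std)        # belief before the new observation
    posterior = D.Normal(post_mu, post_std)      # belief after the new observation
    # Bayesian surprise: KL(posterior || prior), summed over latent dimensions.
    return D.kl_divergence(posterior, prior).sum(dim=-1)

prior_mu, prior_std = torch.zeros(8), torch.ones(8)
post_mu, post_std = torch.randn(8) * 0.1, torch.ones(8) * 0.9
r_int = intrinsic_reward(prior_mu, prior_std, post_mu, post_std)
# r_int is added to (or replaces) the environment reward when training the agent.
print(r_int)
```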
Self-Supervised Exploration via Latent Bayesian Surprise
— AK (@ak92501) April 16, 2021
pdf: https://t.co/JVfIzKAovb
abs: https://t.co/b71O2oucc4
project page: https://t.co/11NarhtGvo pic.twitter.com/tFKRTrnORX
An RL agent that learns a Gaussian state-transition model of a POMDP and explores by using, as a curiosity reward, the KL divergence between its belief before a future observation (prior) and after it (posterior) https://t.co/rDtahnu6jS
— kzmssk (@kzmssk1599) April 16, 2021
14. ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning
Swarnadeep Saha, Prateek Yadav, Lisa Bauer, Mohit Bansal
Recent commonsense-reasoning tasks are typically discriminative in nature, where a model answers a multiple-choice question for a certain context. Discriminative tasks are limiting because they fail to adequately evaluate the model’s ability to reason and explain predictions with underlying commonsense knowledge. They also allow such models to use reasoning shortcuts and not be “right for the right reasons”. In this work, we present ExplaGraphs, a new generative and structured commonsense-reasoning task (and an associated dataset) of explanation graph generation for stance prediction. Specifically, given a belief and an argument, a model has to predict whether the argument supports or counters the belief and also generate a commonsense-augmented graph that serves as a non-trivial, complete, and unambiguous explanation for the predicted stance. The explanation graphs for our dataset are collected via crowdsourcing through a novel Collect-Judge-And-Refine graph collection framework that improves the graph quality via multiple rounds of verification and refinement. A significant 83% of our graphs contain external commonsense nodes with diverse structures and reasoning depths. We also propose a multi-level evaluation framework that checks for the structural and semantic correctness of the generated graphs and their plausibility with respect to human-written graphs. We experiment with state-of-the-art text generation models like BART and T5 to generate explanation graphs and observe that there is a large gap with human performance, thereby encouraging useful future work for this new commonsense graph-based explanation generation task.
Excited to share our new work on "ExplaGraphs: An Explanation Graph Generation Task for Structured Commonsense Reasoning"! Has been a long effort and a great learning experience too 🙂
— Swarnadeep Saha (@swarnaNLP) April 16, 2021
Joint work w. @prateeky2806 @lbauer119 @mohitban47 @uncnlp
Paper: https://t.co/EZLyow5xBG
1/5 pic.twitter.com/k34Nf9RLsO
15. Hierarchical Learning for Generation with Long Source Sequences
Tobias Rohde, Xiaoxia Wu, Yinhan Liu
One of the challenges for current sequence to sequence (seq2seq) models is processing long sequences, such as those in summarization and document level machine translation tasks. These tasks require the model to reason at the token level as well as the sentence and paragraph level. We design and study a new Hierarchical Attention Transformer-based architecture (HAT) that outperforms standard Transformers on several sequence to sequence tasks. In particular, our model achieves state-of-the-art results on four summarization tasks, including ArXiv, CNN/DM, SAMSum, and AMI, and we push PubMed R1 & R2 SOTA further. Our model significantly outperforms our document-level machine translation baseline by 28 BLEU on the WMT19 EN-DE document translation task. We also investigate what the hierarchical layers learn by visualizing the hierarchical encoder-decoder attention. Finally, we study hierarchical learning on encoder-only pre-training and analyze its performance on classification downstream tasks.
Hierarchical Learning for Generation with Long Source Sequences
— AK (@ak92501) April 16, 2021
pdf: https://t.co/79pMTHrQuH
abs: https://t.co/YmG1WGioOz pic.twitter.com/EPG4EXyRBE
16. Spectrogram Inpainting for Interactive Generation of Instrument Sounds
Théis Bazin, Gaëtan Hadjeres, Philippe Esling, Mikhail Malt
Modern approaches to sound synthesis using deep neural networks are hard to control, especially when fine-grained conditioning information is not available, hindering their adoption by musicians. In this paper, we cast the generation of individual instrumental notes as an inpainting-based task, introducing novel and unique ways to iteratively shape sounds. To this end, we propose a two-step approach: first, we adapt the VQ-VAE-2 image generation architecture to spectrograms in order to convert real-valued spectrograms into compact discrete codemaps; we then implement token-masked Transformers for the inpainting-based generation of these codemaps. We apply the proposed architecture on the NSynth dataset on masked resampling tasks. Most crucially, we open-source an interactive web interface to transform sounds by inpainting, for artists and practitioners alike, opening up new, creative uses.
Spectrogram Inpainting for Interactive Generation of Instrument Sounds
— AK (@ak92501) April 16, 2021
pdf: https://t.co/n8jNJtgoa7
abs: https://t.co/Tw8KfqHXK6
project page: https://t.co/m9H7OxeE1E
github: https://t.co/447aLH5co0 pic.twitter.com/D5IhtK9mCI
17. SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, Michael J. Black
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies. To enable learning, the choice of representation is the key. Recent work uses neural networks to parameterize local surface elements. This approach captures locally coherent geometry and non-planar details, can deal with varying topology, and does not require registered training data. However, naively using such methods to model 3D clothed humans fails to capture fine-grained local deformations and generalizes poorly. To address this, we present three key innovations: First, we deform surface elements based on a human body model such that large-scale deformations caused by articulation are explicitly separated from topological changes and local clothing deformations. Second, we address the limitations of existing neural surface elements by regressing local geometry from local features, significantly improving the expressiveness. Third, we learn a pose embedding on a 2D parameterization space that encodes posed body geometry, improving generalization to unseen poses by reducing non-local spurious correlations. We demonstrate the efficacy of our surface representation by learning models of complex clothing from point clouds. The clothing can change topology and deviate from the topology of the body. Once learned, we can animate previously unseen motions, producing high-quality point clouds, from which we generate realistic images with neural rendering. We assess the importance of each technical contribution and show that our approach outperforms the state-of-the-art methods in terms of reconstruction accuracy and inference time. The code is available for research purposes at https://qianlim.github.io/SCALE .
SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
— AK (@ak92501) April 16, 2021
pdf: https://t.co/fwOcDrW8jh
abs: https://t.co/0iVXAIg31u
project page: https://t.co/xHr4fKRp1F pic.twitter.com/LRhwr2KMnc
18. Unmasking the Mask — Evaluating Social Biases in Masked Language Models
Masahiro Kaneko, Danushka Bollegala
Masked Language Models (MLMs) have shown superior performances in numerous downstream NLP tasks when used as text encoders. Unfortunately, MLMs also demonstrate significantly worrying levels of social biases. We show that the previously proposed evaluation metrics for quantifying the social biases in MLMs are problematic for the following reasons: (1) the prediction accuracy of the masked tokens itself tends to be low in some MLMs, which raises questions regarding the reliability of evaluation metrics that use the (pseudo) likelihood of the predicted tokens; (2) the correlation between the prediction accuracy of the mask and the performance in downstream NLP tasks is not taken into consideration; and (3) high-frequency words in the training data are masked more often, introducing noise due to this selection bias in the test cases. To overcome these shortcomings, we propose All Unmasked Likelihood (AUL), a bias evaluation measure that predicts all tokens in a test case given the MLM embedding of the unmasked input. We find that AUL accurately detects different types of biases in MLMs. We also propose AUL with attention weights (AULA) to evaluate tokens based on their importance in a sentence. However, unlike AUL and AULA, previously proposed bias evaluation measures for MLMs systematically overestimate the measured biases, and are heavily influenced by the unmasked tokens in the context.
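A rough sketch of the All Unmasked Likelihood idea as described in the abstract (not the authors' released code): feed the unmasked sentence to the MLM and score every token's log-likelihood from the output distribution. The choice of BERT-base, the pooling over tokens, and the example sentence are assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def all_unmasked_likelihood(sentence: str) -> float:
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = mlm(**enc).logits                       # (1, T, vocab)
    log_probs = logits.log_softmax(dim=-1)
    ids = enc["input_ids"][0]
    token_scores = log_probs[0, torch.arange(len(ids)), ids]
    return token_scores[1:-1].mean().item()             # skip [CLS] / [SEP]

# Bias is then assessed by comparing the scores of stereotypical vs. anti-stereotypical
# variants of a test sentence.
print(all_unmasked_likelihood("The doctor said she would arrive soon."))
```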
We have uploaded to arXiv a paper proposing a method for evaluating the discriminatory biases learned by MLMs such as BERT and ALBERT. This is joint work with Prof. @Bollegala.
— Masahiro Kaneko (@MasahiroKaneko_) April 16, 2021
Paper: https://t.co/eFSDRQJxzz
Blog post (in Japanese): https://t.co/6HTu2vDCAc
19. Robust Generalised Bayesian Inference for Intractable Likelihoods
Takuo Matsubara, Jeremias Knoblauch, François-Xavier Briol, Chris. J. Oates
- retweets: 90, favorites: 89 (04/17/2021 10:37:44)
- links: abs | pdf
- stat.ME | math.ST | stat.CO | stat.ML
Generalised Bayesian inference updates prior beliefs using a loss function, rather than a likelihood, and can therefore be used to confer robustness against possible misspecification of the likelihood. Here we consider generalised Bayesian inference with a Stein discrepancy as a loss function, motivated by applications in which the likelihood contains an intractable normalisation constant. In this context, the Stein discrepancy circumvents evaluation of the normalisation constant and produces generalised posteriors that are either closed form or accessible using standard Markov chain Monte Carlo. On a theoretical level, we show consistency, asymptotic normality, and bias-robustness of the generalised posterior, highlighting how these properties are impacted by the choice of Stein discrepancy. Then, we provide numerical experiments on a range of intractable distributions, including applications to kernel-based exponential family models and non-Gaussian graphical models.
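A schematic form of the generalised (Gibbs) posterior described above, written from the abstract rather than in the paper's exact notation: the likelihood is replaced by a loss, here a Stein discrepancy between the model and the empirical distribution of the data, scaled by a weight. Symbols and the precise scaling are assumptions.

```latex
% Generalised posterior: prior reweighted by an exponentiated loss instead of a likelihood.
\pi_\beta(\theta \mid x_{1:n})
  \;\propto\; \pi(\theta)\,
  \exp\!\big(-\beta\, n\, \ell_n(\theta)\big),
\qquad
\ell_n(\theta) \;=\; \mathrm{SD}\!\Big(p_\theta,\ \tfrac{1}{n}\textstyle\sum_{i=1}^{n}\delta_{x_i}\Big).
% Because the Stein discrepancy SD depends on p_theta only through its score function,
% the intractable normalisation constant of p_theta never needs to be evaluated.
```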
Very excited by our recent paper on "Robust Generalised Bayesian Inference for Intractable Likelihoods" with @TakuoMatsubara, @LauchLab and Chris Oates; you can find it here: https://t.co/VfZHKMtaP6. [1/10]
— François-Xavier Briol (@fx_briol) April 16, 2021
20. Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models
Karolina Stańczak, Sagnik Ray Choudhury, Tiago Pimentel, Ryan Cotterell, Isabelle Augenstein
While the prevalence of large pre-trained language models has led to significant improvements in the performance of NLP systems, recent research has demonstrated that these models inherit societal biases extant in natural language. In this paper, we explore a simple method to probe pre-trained language models for gender bias, which we use to effect a multi-lingual study of gender bias towards politicians. We construct a dataset of 250k politicians from most countries in the world and quantify adjective and verb usage around those politicians’ names as a function of their gender. We conduct our study in 7 languages across 6 different language modeling architectures. Our results demonstrate that stance towards politicians in pre-trained language models is highly dependent on the language used. Finally, contrary to previous findings, our study suggests that larger language models do not tend to be significantly more gender-biased than smaller ones.
New #NLProc paper: quantifying gender bias towards politicians in X-ling language models
— Isabelle Augenstein (@IAugenstein) April 16, 2021
tl;dr:
🗣️gender bias is highly lang-dependent
🤔larger models are not significantly more gender-biased https://t.co/TdSQUJKNIb #NLProc @karstanczak @sagnikrayc @tpimentelms @ryandcotterell pic.twitter.com/k9h5bPnbO9
21. Unlocking Compositional Generalization in Pre-trained Models Using Intermediate Representations
Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, Yuan Zhang
Sequence-to-sequence (seq2seq) models are prevalent in semantic parsing, but have been found to struggle at out-of-distribution compositional generalization. While specialized model architectures and pre-training of seq2seq models have been proposed to address this issue, the former often comes at the cost of generality and the latter only shows limited success. In this paper, we study the impact of intermediate representations on compositional generalization in pre-trained seq2seq models, without changing the model architecture at all, and identify key aspects for designing effective representations. Instead of training to directly map natural language to an executable form, we map to a reversible or lossy intermediate representation that has stronger structural correspondence with natural language. The combination of our proposed intermediate representations and pre-trained models is surprisingly effective, where the best combinations obtain a new state-of-the-art on CFQ (+14.8 accuracy points) and on the template-splits of three text-to-SQL datasets (+15.0 to +19.4 accuracy points). This work highlights that intermediate representations provide an important and potentially overlooked degree of freedom for improving the compositional generalization abilities of pre-trained seq2seq models.
What is the impact of intermediate representations on compositional generalization?
— Jonathan Herzig (@jonherzig) April 16, 2021
We find them to be surprisingly effective in improving semantic parsing generalization for pre-trained LMs!
w/ @ptshaw2 @mchang21 @kelvin_guu @IcePasupat Yuan Zhang
https://t.co/F1EKEkRtEp
1/4 pic.twitter.com/4plVJoju15
22. Camera View Adjustment Prediction for Improving Image Composition
Yu-Chuan Su, Raviteja Vemulapalli, Ben Weiss, Chun-Te Chu, Philip Andrew Mansfield, Lior Shapira, Colvin Pitts
Image composition plays an important role in the quality of a photo. However, not every camera user possesses the knowledge and expertise required for capturing well-composed photos. While post-capture cropping can improve the composition sometimes, it does not work in many common scenarios in which the photographer needs to adjust the camera view to capture the best shot. To address this issue, we propose a deep learning-based approach that provides suggestions to the photographer on how to adjust the camera view before capturing. By optimizing the composition before a photo is captured, our system helps photographers to capture better photos. As there is no publicly-available dataset for this task, we create a view adjustment dataset by repurposing existing image cropping datasets. Furthermore, we propose a two-stage semi-supervised approach that utilizes both labeled and unlabeled images for training a view adjustment model. Experiment results show that the proposed semi-supervised approach outperforms the corresponding supervised alternatives, and our user study results show that the suggested view adjustment improves image composition 79% of the time.
Camera View Adjustment Prediction for Improving Image Composition
— AK (@ak92501) April 16, 2021
pdf: https://t.co/uAELAxpd5x
abs: https://t.co/rIiWI3fLwI pic.twitter.com/VJ17zYRG7D
23. A Simple Baseline for StyleGAN Inversion
Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Weiming Zhang, Lu Yuan, Gang Hua, Nenghai Yu
This paper studies the problem of StyleGAN inversion, which plays an essential role in enabling the pretrained StyleGAN to be used for real facial image editing tasks. This problem places high demands on both quality and efficiency. Existing optimization-based methods can produce high-quality results, but the optimization often takes a long time. On the contrary, forward-based methods are usually faster but the quality of their results is inferior. In this paper, we present a new feed-forward network for StyleGAN inversion, with significant improvement in terms of efficiency and quality. In our inversion network, we introduce: 1) a shallower backbone with multiple efficient heads across scales; 2) multi-layer identity loss and multi-layer face parsing loss to the loss function; and 3) multi-stage refinement. Combining these designs together forms a simple and efficient baseline method which exploits all benefits of optimization-based and forward-based methods. Quantitative and qualitative results show that our method performs better than existing forward-based methods and comparably to state-of-the-art optimization-based methods, while maintaining the high efficiency of forward-based methods. Moreover, a number of real image editing applications demonstrate the efficacy of our method. Our project page is https://wty-ustc.github.io/inversion.
A Simple Baseline for StyleGAN Inversion
— AK (@ak92501) April 16, 2021
pdf: https://t.co/4zYb3nimE9
abs: https://t.co/EspSXEXOmV
project page: https://t.co/2b80PJqodY pic.twitter.com/UuEZN0H3gI
24. Sometimes We Want Translationese
Prasanna Parthasarathi, Koustuv Sinha, Joelle Pineau, Adina Williams
Rapid progress in Neural Machine Translation (NMT) systems over the last few years has been driven primarily towards improving translation quality, and as a secondary focus, improved robustness to input perturbations (e.g. spelling and grammatical mistakes). While performance and robustness are important objectives, by over-focusing on these, we risk overlooking other important properties. In this paper, we draw attention to the fact that for some applications, faithfulness to the original (input) text is important to preserve, even if it means introducing unusual language patterns in the (output) translation. We propose a simple, novel way to quantify whether an NMT system exhibits robustness and faithfulness, focusing on the case of word-order perturbations. We explore a suite of functions to perturb the word order of source sentences without deleting or injecting tokens, and measure the effects on the target side in terms of both robustness and faithfulness. Across several experimental conditions, we observe a strong tendency towards robustness rather than faithfulness. These results allow us to better understand the trade-off between faithfulness and robustness in NMT, and opens up the possibility of developing systems where users have more autonomy and control in selecting which property is best suited for their use case.
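A minimal sketch of the kind of word-order perturbation the paper studies (not the authors' exact suite): shuffle the source words without deleting or injecting tokens, then compare how the NMT output changes. A system that "corrects" the perturbation is robust; one that preserves the unusual order is faithful. The perturbation function and example sentence are illustrative.

```python
import random

def permute_words(sentence: str, seed: int = 0) -> str:
    """Reorder the words of a sentence: same tokens, different order."""
    words = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

src = "the quick brown fox jumps over the lazy dog"
print(permute_words(src))
# Feeding both src and permute_words(src) to the same NMT system and comparing the two
# outputs quantifies the trade-off between robustness and faithfulness described above.
```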
📢Yes! Sometimes the translations have to assume the input word-order to be as is, despite being ungrammatical. We propose a suite of perturbations and metrics to verify if an NMT system is robust beyond necessary. @koustuv Joelle @adinamwilliams. #NLProc https://t.co/Enzj5ZeT82 pic.twitter.com/oFShV0JcTp
— Prasanna Parthasarathi (@prasannapartha) April 16, 2021
25. See through Gradients: Image Batch Recovery via GradInversion
Hongxu Yin, Arun Mallya, Arash Vahdat, Jose M. Alvarez, Jan Kautz, Pavlo Molchanov
Training deep neural networks requires gradient estimation from data batches to update parameters. Gradients per parameter are averaged over a set of data and this has been presumed to be safe for privacy-preserving training in joint, collaborative, and federated learning applications. Prior work only showed the possibility of recovering input data given gradients under very restrictive conditions - a single input point, or a network with no non-linearities, or a small 32x32 px input batch. Therefore, averaging gradients over larger batches was thought to be safe. In this work, we introduce GradInversion, using which input images from a larger batch (8 - 48 images) can also be recovered for large networks such as ResNets (50 layers), on complex datasets such as ImageNet (1000 classes, 224x224 px). We formulate an optimization task that converts random noise into natural images, matching gradients while regularizing image fidelity. We also propose an algorithm for target class label recovery given gradients. We further propose a group consistency regularization framework, where multiple agents starting from different random seeds work together to find an enhanced reconstruction of original data batch. We show that gradients encode a surprisingly large amount of information, such that all the individual images can be recovered with high fidelity via GradInversion, even for complex datasets, deep networks, and large batch sizes.
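A schematic sketch of gradient-matching recovery in the spirit of GradInversion (not the paper's full method, which adds image-fidelity and group-consistency regularizers): optimise random noise so that the gradients it induces match the observed gradients. The network choice, batch size, learning rate, and the random `target_grads` stand-in are assumptions; in the attack setting, `target_grads` are the gradients shared by a victim.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10)
target_grads = [torch.randn_like(p) for p in model.parameters()]   # observed victim gradients (stand-in)
labels = torch.randint(0, 10, (4,))                                # assumed recovered via the label-recovery step

x = torch.randn(4, 3, 224, 224, requires_grad=True)                # start from random noise
opt = torch.optim.Adam([x], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), labels)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((g - t) ** 2).sum() for g, t in zip(grads, target_grads))
    match.backward()                                               # backprop through the gradient computation
    opt.step()
# With real shared gradients as target_grads, x approaches the images of the original batch.
```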
See through Gradients: Image Batch Recovery via GradInversion
— AK (@ak92501) April 16, 2021
pdf: https://t.co/o596rbj3jH
abs: https://t.co/zq6V2wvKpK pic.twitter.com/Mpoi82ctle
26. Exact and Approximate Hierarchical Clustering Using A*
Craig S. Greenberg, Sebastian Macaluso, Nicholas Monath, Avinava Dubey, Patrick Flaherty, Manzil Zaheer, Amr Ahmed, Kyle Cranmer, Andrew McCallum
- retweets: 30, favorites: 28 (04/17/2021 10:37:45)
- links: abs | pdf
- cs.LG | cs.DS | physics.data-an | stat.ML
Hierarchical clustering is a critical task in numerous domains. Many approaches are based on heuristics and the properties of the resulting clusterings are studied post hoc. However, in several applications, there is a natural cost function that can be used to characterize the quality of the clustering. In those cases, hierarchical clustering can be seen as a combinatorial optimization problem. To that end, we introduce a new approach based on A* search. We overcome the prohibitively large search space by combining A* with a novel trellis data structure. This combination results in an exact algorithm that scales to substantially larger search spaces than the previous state of the art, and an approximate algorithm that improves over baselines even in enormous search spaces far beyond the reach of exact methods. We empirically demonstrate that our method achieves substantially higher quality results than baselines for a particle physics use case and other clustering benchmarks. We describe how our method provides significantly improved theoretical bounds on the time and space complexity of A* for clustering.
New! We extend our work on dynamic programming algorithms and novel data structures for probabilistic hierarchical clustering to include A* search @andrewmccallum @nicholasmonath
— Kyle Cranmer (@KyleCranmer) April 16, 2021
Sebastian Macaluso, Craig Greenberg, & new collaborators at Google https://t.co/H6M7CiHv1A pic.twitter.com/b4ov2zzlZi
27. NT5?! Training T5 to Perform Numerical Reasoning
Peng-Jian Yang, Ying Ting Chen, Yuechan Chen, Daniel Cer
Numerical reasoning over text (NRoT) presents unique challenges that are not well addressed by existing pre-training objectives. We explore five sequential training schedules that adapt a pre-trained T5 model for NRoT. Our final model is adapted from T5, but further pre-trained on three datasets designed to strengthen skills necessary for NRoT and general reading comprehension before being fine-tuned on the Discrete Reasoning over Text (DROP) dataset. The training improves DROP’s adjusted F1 performance (a numeracy-focused score) from 45.90 to 70.83. Our model closes in on GenBERT (72.4), a custom BERT-Base model using the same datasets with significantly more parameters. We show that, by training the T5 multitasking framework with multiple numerical reasoning datasets of increasing difficulty, good performance on DROP can be achieved without manually engineering partitioned functionality between distributed and symbolic modules.
28. Points as Queries: Weakly Semi-supervised Object Detection by Points
Liangyu Chen, Tong Yang, Xiangyu Zhang, Wei Zhang, Jian Sun
We propose a novel point-annotated setting for the weakly semi-supervised object detection task, in which the dataset comprises a small set of fully annotated images and a large set of images weakly annotated with points. It achieves a balance between tremendous annotation burden and detection performance. Based on this setting, we analyze existing detectors and find that these detectors have difficulty in fully exploiting the power of the annotated points. To solve this, we introduce a new detector, Point DETR, which extends DETR by adding a point encoder. Extensive experiments conducted on the MS-COCO dataset in various data settings show the effectiveness of our method. In particular, when using 20% fully labeled data from COCO, our detector achieves a promising performance, 33.3 AP, which outperforms a strong baseline (FCOS) by 2.0 AP, and we demonstrate that the point annotations bring gains of over 10 points in various AR metrics.
Points as Queries: Weakly Semi-supervised Object Detection by Pointshttps://t.co/1UnSN8U4zx pic.twitter.com/EWOZ8l0ZtL
— phalanx (@ZFPhalanx) April 16, 2021