1. Learning from others’ mistakes: Avoiding dataset biases without modeling them
Victor Sanh, Thomas Wolf, Yonatan Belinkov, Alexander M. Rush
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show that this method retains improvements in out-of-distribution settings even when no particular bias is targeted by the biased model.
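For a concrete picture of the training objective, here is a minimal PyTorch sketch of a product-of-experts loss of the kind described above: a frozen limited-capacity model supplies biased log-probabilities that are added to the main model's log-probabilities before the cross-entropy is computed, so the main model is pushed to account for what the weak model gets wrong. The names (`poe_loss`, `weak_model`, `main_model`) are illustrative, and the paper's exact training setup may differ.

```python
import torch
import torch.nn.functional as F

def poe_loss(main_logits, weak_logits, labels):
    """Product-of-experts loss: combine the log-probabilities of the frozen
    weak (biased) model with those of the main model, then apply cross-entropy
    to the combined distribution. Gradients only flow through main_logits."""
    log_p_main = F.log_softmax(main_logits, dim=-1)
    log_p_weak = F.log_softmax(weak_logits, dim=-1).detach()  # weak model is frozen
    combined = log_p_main + log_p_weak          # unnormalized log of the product
    return F.cross_entropy(combined, labels)    # cross_entropy renormalizes internally

def train_step(main_model, weak_model, batch, optimizer):
    # Hypothetical training step: only the main model receives gradient updates;
    # at inference time, predictions come from the main model alone.
    main_logits = main_model(batch["input_ids"])
    with torch.no_grad():
        weak_logits = weak_model(batch["input_ids"])
    loss = poe_loss(main_logits, weak_logits, batch["labels"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```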
🚨New pre-print on avoiding dataset biases
— Victor Sanh (@SanhEstPasMoi) December 3, 2020
We show a method to train a model to ignore dataset biases without explicitly identifying/modeling them by learning from the errors of a “dumb” model.
Link: https://t.co/UqodTR58P1
W/ 🤩 collaborators @Thom_Wolf, @boknilev & @srush_nlp pic.twitter.com/RWcNscxdmF
2. Learning Spatial Attention for Face Super-Resolution
Chaofeng Chen, Dihong Gong, Hao Wang, Zhifeng Li, Kwan-Yee K. Wong
General image super-resolution techniques have difficulty recovering detailed face structures when applied to low resolution face images. Recent deep learning based methods tailored for face images have achieved improved performance by being jointly trained with additional tasks such as face parsing and landmark prediction. However, multi-task learning requires extra manually labeled data. Besides, most existing works can only generate relatively low resolution face images (e.g., 128×128), and their applications are therefore limited. In this paper, we introduce a novel SPatial Attention Residual Network (SPARNet) built on our newly proposed Face Attention Units (FAUs) for face super-resolution. Specifically, we introduce a spatial attention mechanism to the vanilla residual blocks. This enables the convolutional layers to adaptively bootstrap features related to the key face structures and pay less attention to less feature-rich regions. This makes the training more effective and efficient, as the key face structures only account for a very small portion of the face image. Visualization of the attention maps shows that our spatial attention network can capture the key face structures well even for very low resolution faces (e.g., 16×16). Quantitative comparisons on various metrics (including PSNR, SSIM, identity similarity, and landmark detection) demonstrate the superiority of our method over the current state of the art. We further extend SPARNet with multi-scale discriminators, named SPARNetHD, to produce high resolution results (i.e., 512×512). We show that SPARNetHD trained with synthetic data can not only produce high quality and high resolution outputs for synthetically degraded face images, but also generalizes well to real world low quality face images. Code is available at https://github.com/chaofengc/Face-SPARNet.
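As a rough illustration of the idea, the PyTorch sketch below shows a residual block augmented with a spatial attention branch that re-weights feature locations. This is an approximation for illustration only, not the exact FAU design from the paper; the reference implementation lives in the linked repository, and the class name `SpatialAttentionResBlock` is made up here.

```python
import torch
import torch.nn as nn

class SpatialAttentionResBlock(nn.Module):
    """Illustrative residual block with a spatial attention branch: the
    attention map re-weights feature locations so that key face structures
    contribute more to the residual signal than flat, feature-poor regions."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Attention branch predicts a single-channel spatial map in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(channels, channels // 4, 1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(channels // 4, 1, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        res = self.body(x)
        attn = self.attention(res)   # (B, 1, H, W) spatial attention map
        return x + res * attn        # attended residual connection

# Example: 64-channel features from a 16x16 low-resolution face.
block = SpatialAttentionResBlock(64)
feats = torch.randn(2, 64, 16, 16)
out = block(feats)                   # same shape as the input features
```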
Learning Spatial Attention for Face Super-Resolution
— AK (@ak92501) December 3, 2020
pdf: https://t.co/dhnPMOuNod
abs: https://t.co/G0kYSNbPMb pic.twitter.com/8NbwEXqwhh
3. MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers
Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
We present MaX-DeepLab, the first end-to-end model for panoptic segmentation. Our approach simplifies the current pipeline that depends heavily on surrogate sub-tasks and hand-designed components, such as box detection, non-maximum suppression, thing-stuff merging, etc. Although these sub-tasks are tackled by area experts, they fail to comprehensively solve the target task. By contrast, our MaX-DeepLab directly predicts class-labeled masks with a mask transformer, and is trained with a panoptic quality inspired loss via bipartite matching. Our mask transformer employs a dual-path architecture that introduces a global memory path in addition to a CNN path, allowing direct communication with any CNN layers. As a result, MaX-DeepLab shows a significant 7.1% PQ gain in the box-free regime on the challenging COCO dataset, closing the gap between box-based and box-free methods for the first time. A small variant of MaX-DeepLab improves 3.0% PQ over DETR with similar parameters and M-Adds. Furthermore, MaX-DeepLab, without test time augmentation, achieves new state-of-the-art 51.3% PQ on COCO test-dev set.
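The bipartite-matching step behind the PQ-inspired training signal can be sketched as follows: each predicted mask is scored against each ground-truth mask by the product of its probability for the correct class and a mask similarity term, and the Hungarian algorithm picks a one-to-one assignment. This is an illustrative sketch only; the paper's actual loss formulation and implementation differ in the details.

```python
import torch
from scipy.optimize import linear_sum_assignment

def dice_similarity(pred_masks, gt_masks, eps=1e-6):
    """pred_masks: (N, H*W) soft masks in [0, 1]; gt_masks: (M, H*W) binary masks.
    Returns an (N, M) matrix of dice similarities."""
    inter = pred_masks @ gt_masks.t()                             # (N, M)
    denom = pred_masks.sum(-1, keepdim=True) + gt_masks.sum(-1)   # broadcasts to (N, M)
    return (2 * inter + eps) / (denom + eps)

def match_masks(class_probs, pred_masks, gt_classes, gt_masks):
    """Hungarian matching between predictions and ground truth.
    class_probs: (N, C); pred_masks: (N, H*W); gt_classes: (M,) long; gt_masks: (M, H*W)."""
    # Similarity = (probability of the correct class) * (mask dice), in the spirit
    # of a PQ-style objective; negate it to obtain a cost for the assignment solver.
    cls_sim = class_probs[:, gt_classes]                  # (N, M)
    mask_sim = dice_similarity(pred_masks, gt_masks)      # (N, M)
    cost = -(cls_sim * mask_sim).detach().cpu().numpy()
    pred_idx, gt_idx = linear_sum_assignment(cost)
    # Matched pairs get mask/class supervision; unmatched predictions would be
    # trained toward a "no object" target.
    return pred_idx, gt_idx
```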
MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers
— AK (@ak92501) December 3, 2020
pdf: https://t.co/9ON8lHTegA
abs: https://t.co/wVExYKlHE2 pic.twitter.com/2ITfKS2ZVb
4. A Photogrammetry-based Framework to Facilitate Image-based Modeling and Automatic Camera Tracking
Sebastian Bullinger, Christoph Bodensteiner, Michael Arens
We propose a framework that extends Blender to exploit Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques for image-based modeling tasks such as sculpting or camera and motion tracking. Applying SfM allows us to determine camera motions without manually defining feature tracks or calibrating the cameras used to capture the image data. With MVS we are able to automatically compute dense scene models, which is not feasible with the built-in tools of Blender. Currently, our framework supports several state-of-the-art SfM and MVS pipelines. The modular system design enables us to integrate further approaches without additional effort. The framework is publicly available as an open source software package.
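To give a flavor of what such an integration involves, here is a small Blender Python (bpy) sketch that creates camera objects from a list of SfM poses. The pose list is a hypothetical stand-in for the output of an SfM pipeline such as COLMAP; the actual add-on exposes this through its own importers and UI.

```python
# Run inside Blender's Python environment. Illustrative only: creates one
# Blender camera object per reconstructed SfM pose.
import bpy
from mathutils import Quaternion, Vector

poses = [
    # (name, world-from-camera rotation quaternion (w, x, y, z), camera center (x, y, z))
    ("frame_0001", (1.000, 0.000, 0.0, 0.0), (0.0, -3.0, 1.5)),
    ("frame_0002", (0.996, 0.087, 0.0, 0.0), (0.2, -2.9, 1.5)),
]

for name, quat, center in poses:
    cam_data = bpy.data.cameras.new(name)
    cam_obj = bpy.data.objects.new(name, cam_data)
    bpy.context.scene.collection.objects.link(cam_obj)
    # Compose a 4x4 world matrix from the rotation and the camera center.
    world = Quaternion(quat).to_matrix().to_4x4()
    world.translation = Vector(center)
    cam_obj.matrix_world = world
```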
A Photogrammetry-based Framework to Facilitate Image-based Modeling and Automatic Camera Tracking
— AK (@ak92501) December 3, 2020
pdf: https://t.co/LIBQAIaMGY
abs: https://t.co/kaCccnIdXR
github: https://t.co/Y3HncPAeUk pic.twitter.com/MhYEt6gM05
5. pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
Eric R. Chan, Marco Monteiro, Petr Kellnhofer, Jiajun Wu, Gordon Wetzstein
We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering. Existing approaches however fall short in two ways: first, they may lack an underlying 3D representation or rely on view-inconsistent rendering, hence synthesizing images that are not multi-view consistent; second, they often depend upon representation network architectures that are not expressive enough, and their results thus lack image quality. We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks (π-GAN or pi-GAN), for high-quality 3D-aware image synthesis. π-GAN leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent 3D representations with fine detail. The proposed approach obtains state-of-the-art results for 3D-aware image synthesis with multiple real and synthetic datasets.
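The building block behind the periodic implicit representation can be illustrated with a SIREN-style sine layer, sketched below in PyTorch. This omits the conditioning on the latent code and the volume rendering used by the full π-GAN generator; the initialization constants follow a common SIREN convention rather than the paper itself.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation, as used in SIREN-style
    implicit networks; omega_0 scales the input frequency."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # Common SIREN initialization: a wider uniform range for the first layer.
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# Tiny implicit network mapping 3D coordinates to a density plus color.
net = nn.Sequential(
    SineLayer(3, 128, is_first=True),
    SineLayer(128, 128),
    nn.Linear(128, 4),          # e.g., density + RGB for volumetric rendering
)
out = net(torch.rand(1024, 3))  # one prediction per sampled 3D point
```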
pi-GAN: Periodic Implicit Generative Adversarial Networks for 3D-Aware Image Synthesis
— AK (@ak92501) December 3, 2020
pdf: https://t.co/LP716AJNfm
abs: https://t.co/a6ylN0EEEy pic.twitter.com/MQysrSAs53