1. iMAP: Implicit Mapping and Positioning in Real-Time
Edgar Sucar, Shikun Liu, Joseph Ortiz, Andrew J. Davison
We show for the first time that a multilayer perceptron (MLP) can serve as the only scene representation in a real-time SLAM system for a handheld RGB-D camera. Our network is trained in live operation without prior data, building a dense, scene-specific implicit 3D model of occupancy and colour which is also immediately used for tracking. Achieving real-time SLAM via continual training of a neural network against a live image stream requires significant innovation. Our iMAP algorithm uses a keyframe structure and multi-processing computation flow, with dynamic information-guided pixel sampling for speed, achieving tracking at 10 Hz and global map updating at 2 Hz. The advantages of an implicit MLP over standard dense SLAM techniques include efficient geometry representation with automatic detail control and smooth, plausible filling-in of unobserved regions such as the back surfaces of objects.
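The core idea is compact enough to sketch: a single MLP maps a 3D point to colour and volume density, so the entire map is just the network's weights, trained online against the live RGB-D stream. Below is a minimal PyTorch sketch of such a network; the layer sizes and sinusoidal positional encoding are illustrative assumptions, not iMAP's exact configuration.

```python
import torch
import torch.nn as nn

class ImplicitSceneMLP(nn.Module):
    """Maps a 3D point to (RGB colour, volume density); the whole scene
    'map' lives in these weights. Sizes are illustrative, not iMAP's."""
    def __init__(self, hidden=256, num_freqs=10):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 + 3 * 2 * num_freqs          # xyz + sinusoidal encoding
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                # RGB + density
        )

    def forward(self, xyz):                      # xyz: (N, 3)
        freqs = 2.0 ** torch.arange(self.num_freqs, device=xyz.device)
        enc = (xyz[..., None] * freqs).flatten(-2)            # (N, 3*num_freqs)
        enc = torch.cat([xyz, enc.sin(), enc.cos()], dim=-1)  # (N, in_dim)
        rgb_density = self.net(enc)
        rgb = torch.sigmoid(rgb_density[..., :3])
        density = torch.relu(rgb_density[..., 3:])
        return rgb, density

# Querying the "map" is just a forward pass over sampled 3D points:
points = torch.rand(1024, 3)
colour, density = ImplicitSceneMLP()(points)
```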
Excited to share iMAP, first real-time SLAM system to use an implicit scene network as map representation.
— Edgar Sucar (@SucarEdgar) March 24, 2021
Work with: @liu_shikun, @joeaortiz, @AjdDavison
Project page: https://t.co/Tagk4jFN2M
Paper: https://t.co/OQA1QdLY4Q pic.twitter.com/KG0cY68MXn
2. Generative Minimization Networks: Training GANs Without Competition
Paulina Grnarova, Yannic Kilcher, Kfir Y. Levy, Aurelien Lucchi, Thomas Hofmann
Many applications in machine learning can be framed as minimization problems and solved efficiently using gradient-based techniques. However, recent applications of generative models, particularly GANs, have triggered interest in solving min-max games for which standard optimization techniques are often not suitable. Known problems experienced by practitioners include a lack of convergence guarantees and convergence to non-optimal cycles. At the heart of these problems is the min-max structure of the GAN objective, which creates non-trivial dependencies between the players. We propose to address this problem by optimizing a different objective that circumvents the min-max structure using the notion of duality gap from game theory. We provide novel convergence guarantees on this objective and demonstrate why the obtained limit point solves the problem better than known techniques.
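For reference, the duality gap the abstract refers to is the standard game-theoretic quantity: for a two-player zero-sum game with value $f(\theta, \phi)$ (generator parameters $\theta$, discriminator parameters $\phi$),

$$\mathrm{DG}(\theta, \phi) \;=\; \max_{\phi'} f(\theta, \phi') \;-\; \min_{\theta'} f(\theta', \phi) \;\geq\; 0,$$

with equality exactly at a saddle point (Nash equilibrium). Minimizing the duality gap therefore turns the min-max game into an ordinary minimization problem; the paper's specific estimator of this quantity and its convergence guarantees are in the text.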
Generative Minimization Networks: Training GANs Without Competition
— AK (@ak92501) March 24, 2021
pdf: https://t.co/xreEHRgww1
abs: https://t.co/ZbJLAbh5j8 pic.twitter.com/WEc4YgQ4WN
3. Scaling Local Self-Attention For Parameter Efficient Visual Backbones
Ashish Vaswani, Prajit Ramachandran, Aravind Srinivas, Niki Parmar, Blake Hechtman, Jonathon Shlens
Self-attention has the promise of improving computer vision systems due to parameter-independent scaling of receptive fields and content-dependent interactions, in contrast to parameter-dependent scaling and content-independent interactions of convolutions. Self-attention models have recently been shown to have encouraging improvements on accuracy-parameter trade-offs compared to baseline convolutional models such as ResNet-50. In this work, we aim to develop self-attention models that can outperform not just the canonical baseline models, but even the high-performing convolutional models. We propose two extensions to self-attention that, in conjunction with a more efficient implementation of self-attention, improve the speed, memory usage, and accuracy of these models. We leverage these improvements to develop a new self-attention model family, \emph{HaloNets}, which reach state-of-the-art accuracies on the parameter-limited setting of the ImageNet classification benchmark. In preliminary transfer learning experiments, we find that HaloNet models outperform much larger models and have better inference performance. On harder tasks such as object detection and instance segmentation, our simple local self-attention and convolutional hybrids show improvements over very strong baselines. These results mark another step in demonstrating the efficacy of self-attention models on settings traditionally dominated by convolutional models.
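The "local self-attention" being scaled here is blocked attention in which each query block attends to its own block plus a small halo of surrounding pixels. Below is a minimal single-head PyTorch sketch of that neighbourhood gathering; it omits learned query/key/value projections, relative position encodings and multi-head logic, so it illustrates the blocking-plus-halo pattern rather than the HaloNet layer itself.

```python
import torch
import torch.nn.functional as F

def halo_attention(x, block=4, halo=1):
    # x: (B, C, H, W) with H, W divisible by `block`.
    B, C, H, W = x.shape
    q = k = v = x  # HaloNet uses learned q/k/v projections; omitted here

    # Queries: non-overlapping block x block windows.
    q_blocks = F.unfold(q, kernel_size=block, stride=block)          # (B, C*block*block, nB)
    nB = q_blocks.shape[-1]
    q_blocks = q_blocks.view(B, C, block * block, nB).permute(0, 3, 2, 1)  # (B, nB, block^2, C)

    # Keys/values: the same blocks grown by `halo` pixels on every side.
    win = block + 2 * halo
    kv = F.unfold(k, kernel_size=win, stride=block, padding=halo)     # (B, C*win*win, nB)
    kv = kv.view(B, C, win * win, nB).permute(0, 3, 2, 1)             # (B, nB, win^2, C)

    # Attention of each query block over its haloed neighbourhood.
    attn = torch.softmax(q_blocks @ kv.transpose(-1, -2) / C ** 0.5, dim=-1)
    out = attn @ kv                                                   # (B, nB, block^2, C)

    # Fold the attended blocks back into a feature map.
    out = out.permute(0, 3, 2, 1).reshape(B, C * block * block, nB)
    return F.fold(out, output_size=(H, W), kernel_size=block, stride=block)

x = torch.randn(2, 32, 16, 16)
y = halo_attention(x, block=4, halo=1)   # same shape as x
```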
Scaling Local Self-Attention For Parameter Efficient Visual Backbones
— Aran Komatsuzaki (@arankomatsuzaki) March 24, 2021
Self-attention-based HaloNets achieve SotA parameter-accuracy on ImageNet and perform well on object detection. https://t.co/C4wfZJcu1F pic.twitter.com/vmCeJKxTLt
4. Self-Supervised Pretraining Improves Self-Supervised Pretraining
Colorado J. Reed, Xiangyu Yue, Ani Nrusimha, Sayna Ebrahimi, Vivek Vijaykumar, Richard Mao, Bo Li, Shanghang Zhang, Devin Guillory, Sean Metzger, Kurt Keutzer, Trevor Darrell
While self-supervised pretraining has proven beneficial for many computer vision tasks, it requires expensive and lengthy computation, large amounts of data, and is sensitive to data augmentation. Prior work demonstrates that models pretrained on datasets dissimilar to their target data, such as chest X-ray models trained on ImageNet, underperform models trained from scratch. Users who lack the resources to pretrain must use existing models with lower performance. This paper explores Hierarchical PreTraining (HPT), which decreases convergence time and improves accuracy by initializing the pretraining process with an existing pretrained model. Through experimentation on 16 diverse vision datasets, we show HPT converges up to 80x faster, improves accuracy across tasks, and improves the robustness of the self-supervised pretraining process to changes in the image augmentation policy or amount of pretraining data. Taken together, HPT provides a simple framework for obtaining better pretrained representations with fewer computational resources.
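Mechanically, HPT amounts to starting the self-supervised pretraining run from an already-pretrained checkpoint rather than from random weights. A hedged sketch of that initialization step (ResNet-50 and the SGD hyperparameters are illustrative stand-ins for whichever encoder and SSL method are used):

```python
import torch
import torchvision

# Base pretrained model (generalist weights, e.g. ImageNet);
# on torchvision >= 0.13 use `weights=...` instead of `pretrained=True`.
base = torchvision.models.resnet50(pretrained=True)

# Encoder for the *secondary* self-supervised pretraining stage on the target
# domain: initialise it from the base checkpoint instead of from scratch.
encoder = torchvision.models.resnet50(pretrained=False)
encoder.load_state_dict(base.state_dict())

# From here, run the usual MoCo/SimCLR/BYOL-style pretraining loop on
# unlabeled target-domain images, then fine-tune on the downstream task.
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.03, momentum=0.9)
```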
Self-Supervised Pretraining Improves Self-Supervised Pretraining
— Aran Komatsuzaki (@arankomatsuzaki) March 24, 2021
Proposes HPT, which accelerates convergence and improves accuracy by initializing the pretraining process with an existing pretrained model.
abs: https://t.co/A1LQMLJWeB
code: https://t.co/7K6UK1xRY4 pic.twitter.com/KRceZus2QY
Self-Supervised Pretraining Improves Self-Supervised Pretraining
— AK (@ak92501) March 24, 2021
pdf: https://t.co/fL2mxN1vzX
abs: https://t.co/fPPTidRWaE
github: https://t.co/guywFouV07 pic.twitter.com/NGJv9nyOid
5. Transformers Solve the Limited Receptive Field for Monocular Depth Prediction
Guanglei Yang, Hao Tang, Mingli Ding, Nicu Sebe, Elisa Ricci
While convolutional neural networks have shown a tremendous impact on various computer vision tasks, they generally demonstrate limitations in explicitly modeling long-range dependencies due to the intrinsic locality of the convolution operation. Transformers, initially designed for natural language processing tasks, have emerged as alternative architectures with innate global self-attention mechanisms to capture long-range dependencies. In this paper, we propose TransDepth, an architecture which benefits from both convolutional neural networks and transformers. To prevent the network from losing its ability to capture local-level details due to the adoption of transformers, we propose a novel decoder that employs attention mechanisms based on gates. Notably, this is the first paper to apply transformers to pixel-wise prediction problems involving continuous labels (i.e., monocular depth prediction and surface normal estimation). Extensive experiments demonstrate that the proposed TransDepth achieves state-of-the-art performance on three challenging datasets. The source code and trained models are available at https://github.com/ygjwd12345/TransDepth.
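The decoder's goal, keeping local CNN detail while mixing in the transformer's global context, can be illustrated with a simple learned gate. The snippet below is a generic gated-fusion sketch under my own assumptions, not a reimplementation of TransDepth's attention-gate decoder.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse a CNN feature map (local detail) with a transformer feature map
    (global context) via a per-pixel learned gate."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, cnn_feat, trans_feat):      # both (B, C, H, W)
        g = self.gate(torch.cat([cnn_feat, trans_feat], dim=1))
        return g * cnn_feat + (1 - g) * trans_feat

fused = GatedFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```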
Transformers Solve the Limited Receptive Field for Monocular Depth Prediction
— AK (@ak92501) March 24, 2021
pdf: https://t.co/hEhkLBLBCB
abs: https://t.co/egR5yL6Ehu
github: https://t.co/lSvOVOKUns pic.twitter.com/0JE7YKfkf1
6. End-to-End Trainable Multi-Instance Pose Estimation with Transformers
Lucas Stoffl, Maxime Vidal, Alexander Mathis
We propose a new end-to-end trainable approach for multi-instance pose estimation by combining a convolutional neural network with a transformer. We cast multi-instance pose estimation from images as a direct set prediction problem. Inspired by recent work on end-to-end trainable object detection with transformers, we use a transformer encoder-decoder architecture together with a bipartite matching scheme to directly regress the pose of all individuals in a given image. Our model, called POse Estimation Transformer (POET), is trained using a novel set-based global loss that consists of a keypoint loss, a keypoint visibility loss, a center loss and a class loss. POET reasons about the relations between detected humans and the full image context to directly predict the poses in parallel. We show that POET can achieve high accuracy on the challenging COCO keypoint detection task. To the best of our knowledge, this model is the first end-to-end trainable multi-instance human pose estimation method.
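The bipartite matching scheme is the DETR-style step that pairs each predicted pose with at most one ground-truth person before the loss is computed. A minimal sketch with SciPy's Hungarian solver is below; the cost here is just mean keypoint distance, whereas POET's full matching cost also includes visibility, center and class terms.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_poses(pred_kpts, gt_kpts):
    """pred_kpts: (P, K, 2) predicted keypoints; gt_kpts: (G, K, 2) ground truth.
    Returns (pred_idx, gt_idx) pairs minimising the total matching cost."""
    # cost[i, j] = mean L2 distance between prediction i and ground truth j
    diff = pred_kpts[:, None] - gt_kpts[None, :]           # (P, G, K, 2)
    cost = np.linalg.norm(diff, axis=-1).mean(-1)          # (P, G)
    pred_idx, gt_idx = linear_sum_assignment(cost)
    return list(zip(pred_idx, gt_idx))

preds = np.random.rand(5, 17, 2)    # 5 predicted people, 17 COCO keypoints each
gts = np.random.rand(3, 17, 2)      # 3 annotated people
print(match_poses(preds, gts))      # each ground truth matched to one prediction
```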
ℙ𝕆se 𝔼stimation 𝕋ransformer (ℙ𝕆𝔼𝕋): End-to-End Trainable Multi-Instance Pose Estimation with Transformers
— Dr. Mackenzie Mathis (@TrackingActions) March 24, 2021
🔥1st end-to-end trainable multi-human pose estimation method
👏 Super proud of @TrackingPlumes + @LStoffl + @vmaxmc2! cc @amathislab https://t.co/TfILlzrymb pic.twitter.com/jBGhQK92ia
We developed an end-to-end trainable multi-instance pose estimation model with transformers -https://t.co/2HZix5mMzc - great work by PhD student @LStoffl and Master's student @vmaxmc2!!! pic.twitter.com/EI99usQkow
— A. Mathis Lab (@amathislab) March 24, 2021
7. Spatial Intention Maps for Multi-Agent Mobile Manipulation
Jimmy Wu, Xingyuan Sun, Andy Zeng, Shuran Song, Szymon Rusinkiewicz, Thomas Funkhouser
The ability to communicate intention enables decentralized multi-agent robots to collaborate while performing physical tasks. In this work, we present spatial intention maps, a new intention representation for multi-agent vision-based deep reinforcement learning that improves coordination between decentralized mobile manipulators. In this representation, each agent’s intention is provided to other agents, and rendered into an overhead 2D map aligned with visual observations. This synergizes with the recently proposed spatial action maps framework, in which state and action representations are spatially aligned, providing inductive biases that encourage emergent cooperative behaviors requiring spatial coordination, such as passing objects to each other or avoiding collisions. Experiments across a variety of multi-agent environments, including heterogeneous robot teams with different abilities (lifting, pushing, or throwing), show that incorporating spatial intention maps improves performance for different mobile manipulation tasks while significantly enhancing cooperative behaviors.
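The representation itself is simple to picture: each agent's current intention (for example its planned target location) is rasterised into an overhead grid in the same frame as the state map and stacked as an extra observation channel. A small NumPy sketch under that reading, with grid size and encoding chosen purely for illustration:

```python
import numpy as np

def render_intention_map(grid_shape, other_agent_targets):
    """Rasterise other agents' intended target cells into a 2D map that is
    spatially aligned with the overhead state/action maps."""
    intention = np.zeros(grid_shape, dtype=np.float32)
    for (row, col) in other_agent_targets:
        intention[row, col] = 1.0          # mark where each other agent intends to go
    return intention

state_map = np.random.rand(64, 64)                          # overhead observation
intention_map = render_intention_map((64, 64), [(10, 52), (33, 7)])
observation = np.stack([state_map, intention_map])          # extra input channel(s)
```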
Spatial Intention Maps for Multi-Agent Mobile Manipulation
— AK (@ak92501) March 24, 2021
pdf: https://t.co/urckpHHIyr
abs: https://t.co/GdYymtBekO
project page: https://t.co/laYcqOHAmr
github: https://t.co/1Kc9ag5mJm pic.twitter.com/YvG82bzG8o
8. Detecting Hate Speech with GPT-3
Ke-Li Chiu, Rohan Alexander
Sophisticated language models such as OpenAI’s GPT-3 can generate hateful text that targets marginalized groups. Given this capacity, we are interested in whether large language models can be used to identify hate speech and classify text as sexist or racist. We use GPT-3 to identify sexist and racist text passages with zero-, one-, and few-shot learning. We find that with zero- and one-shot learning, GPT-3 is able to identify sexist or racist text with an accuracy between 48 per cent and 69 per cent. With few-shot learning and an instruction included in the prompt, the model’s accuracy can be as high as 78 per cent. We conclude that large language models have a role to play in hate speech detection, and that with further development language models could be used to counter hate speech and even self-police.
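The zero-, one- and few-shot setups differ only in how the prompt is assembled before being sent to the model. A sketch of a few-shot classification prompt with an instruction follows; the wording, labels and example passages are illustrative, not the prompts used in the paper.

```python
# Few-shot prompt construction for text classification with a large LM.
instruction = "Classify each passage as 'sexist', 'racist', or 'neither'."

examples = [
    ("Women shouldn't be allowed to drive.", "sexist"),
    ("I enjoyed the concert last night.", "neither"),
]

query = "They should all go back to where they came from."

prompt = instruction + "\n\n"
for text, label in examples:
    prompt += f"Passage: {text}\nLabel: {label}\n\n"
prompt += f"Passage: {query}\nLabel:"       # the model completes the label

print(prompt)
```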
'Detecting Hate Speech with GPT-3' co-authored with @UofTInfoFaculty student Ke-Li Chiu is now available on arXiv and we'd love any feedback that you have: https://t.co/40NEnOBfqM
— Rohan Alexander (@RohanAlexander) March 24, 2021
Thank you to @ghadfield and @TorontoSRI for enabling this paper. pic.twitter.com/y315KZMpxF
9. Instance-level Image Retrieval using Reranking Transformers
Fuwen Tan, Jiangbo Yuan, Vicente Ordonez
Instance-level image retrieval is the task of searching in a large database for images that match an object in a query image. To address this task, systems usually rely on a retrieval step that uses global image descriptors, and a subsequent step that performs domain-specific refinements or reranking by leveraging operations such as geometric verification based on local features. In this work, we propose Reranking Transformers (RRTs) as a general model to incorporate both local and global features to rerank the matching images in a supervised fashion and thus replace the relatively expensive process of geometric verification. RRTs are lightweight and can be easily parallelized so that reranking a set of top matching results can be performed in a single forward-pass. We perform extensive experiments on the Revisited Oxford and Paris datasets, and the Google Landmark v2 dataset, showing that RRTs outperform previous reranking approaches while using far fewer local descriptors. Moreover, we demonstrate that, unlike existing approaches, RRTs can be optimized jointly with the feature extractor, which can lead to feature representations tailored to downstream tasks and further accuracy improvements. Training code and pretrained models will be made public.
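The reranker's job reduces to: given the global and local descriptors of a query/candidate pair, output a single matching score in one forward pass. A compact sketch of that interface with a small transformer encoder is below; the tokenisation, CLS pooling and dimensions are my assumptions rather than the paper's exact design (requires PyTorch >= 1.9 for batch_first).

```python
import torch
import torch.nn as nn

class RerankingScorer(nn.Module):
    """Scores a (query, candidate) image pair from their descriptors."""
    def __init__(self, dim=128, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        self.score = nn.Linear(dim, 1)

    def forward(self, q_global, q_local, c_global, c_local):
        # Treat every descriptor as a token: [CLS, query global, query locals,
        # candidate global, candidate locals].
        B = q_global.shape[0]
        tokens = torch.cat(
            [self.cls.expand(B, -1, -1),
             q_global.unsqueeze(1), q_local,
             c_global.unsqueeze(1), c_local], dim=1)
        out = self.encoder(tokens)
        return self.score(out[:, 0]).squeeze(-1)   # matching score from the CLS token

scorer = RerankingScorer()
score = scorer(torch.randn(2, 128), torch.randn(2, 50, 128),
               torch.randn(2, 128), torch.randn(2, 50, 128))
```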
Instance-level Image Retrieval using Reranking Transformers https://t.co/g6l4FRy9wV pic.twitter.com/VCrIiBDqsp
— phalanx (@ZFPhalanx) March 24, 2021
10. Multilingual Autoregressive Entity Linking
Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni
We present mGENRE, a sequence-to-sequence system for the Multilingual Entity Linking (MEL) problem — the task of resolving language-specific mentions to a multilingual Knowledge Base (KB). For a mention in a given language, mGENRE predicts the name of the target entity left-to-right, token-by-token in an autoregressive fashion. The autoregressive formulation allows us to effectively cross-encode mention string and entity names to capture more interactions than the standard dot product between mention and entity vectors. It also enables fast search within a large KB even for mentions that do not appear in mention tables and with no need for large-scale vector indices. While prior MEL works use a single representation for each entity, we match against entity names of as many languages as possible, which allows exploiting language connections between source input and target name. Moreover, in a zero-shot setting on languages with no training data at all, mGENRE treats the target language as a latent variable that is marginalized at prediction time. This leads to over 50% improvements in average accuracy. We show the efficacy of our approach through extensive evaluation including experiments on three popular MEL benchmarks where mGENRE establishes new state-of-the-art results. Code and pre-trained models at https://github.com/facebookresearch/GENRE.
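The zero-shot trick mentioned in the abstract, marginalising over the target language, is a log-sum-exp over the per-language sequence scores of one entity's names. A toy sketch of that scoring step; the log-probabilities are made-up placeholders for what the seq2seq model would assign to each generated name.

```python
import torch

# Hypothetical log-probabilities the autoregressive model assigns to the
# names of one candidate entity in different languages for a given mention.
log_probs_per_language = {
    "en": torch.tensor(-2.1),   # "World Health Organization"
    "es": torch.tensor(-2.8),   # "Organización Mundial de la Salud"
    "de": torch.tensor(-3.5),   # "Weltgesundheitsorganisation"
}

# Treat the language as a latent variable and marginalise it out:
# log p(entity | mention) = logsumexp_l log p(name_l | mention)  (up to a language prior).
entity_score = torch.logsumexp(torch.stack(list(log_probs_per_language.values())), dim=0)
print(entity_score)
```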
Multilingual Autoregressive Entity Linking
— AK (@ak92501) March 24, 2021
pdf: https://t.co/9aPikbeQTZ
abs: https://t.co/8ge2vz3knd
github: https://t.co/kANlsxEj7X pic.twitter.com/848IJLuFxc
11. Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning
Chaitanya K. Ryali, David J. Schwab, Ari S. Morcos
Unsupervised representation learning is an important challenge in computer vision, with self-supervised learning methods recently closing the gap to supervised representation learning. An important ingredient in high-performing self-supervised methods is the use of data augmentation by training models to place different augmented views of the same image nearby in embedding space. However, commonly used augmentation pipelines treat images holistically, disregarding the semantic relevance of parts of an image (e.g. a subject vs. a background), which can lead to the learning of spurious correlations. Our work addresses this problem by investigating a class of simple, yet highly effective “background augmentations”, which encourage models to focus on semantically-relevant content by discouraging them from focusing on image backgrounds. Background augmentations lead to substantial improvements (+1-2% on ImageNet-1k) in performance across a spectrum of state-of-the-art self-supervised methods (MoCov2, BYOL, SwAV) on a variety of tasks, allowing us to reach within 0.3% of supervised performance. We also demonstrate that background augmentations improve robustness to a number of out-of-distribution settings, including natural adversarial examples, the backgrounds challenge, adversarial attacks, and ReaL ImageNet.
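A background augmentation in this sense is easy to write down once a foreground mask is available: keep the subject, replace everything else. A minimal NumPy sketch (obtaining the mask, e.g. from a saliency or segmentation model, is outside the snippet):

```python
import numpy as np

def swap_background(image, foreground_mask, new_background):
    """Keep the subject pixels, replace background pixels with another image.
    image, new_background: (H, W, 3) float arrays; foreground_mask: (H, W) in [0, 1]."""
    mask = foreground_mask[..., None]                 # broadcast over RGB channels
    return mask * image + (1.0 - mask) * new_background

img = np.random.rand(224, 224, 3)
bg = np.random.rand(224, 224, 3)
mask = (np.random.rand(224, 224) > 0.5).astype(np.float32)   # stand-in for a real mask
augmented = swap_background(img, mask, bg)
```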
Leveraging background augmentations to encourage semantic focus in self-supervised contrastive learning
— AK (@ak92501) March 24, 2021
pdf: https://t.co/heJfQ5Ic3A
abs: https://t.co/nwM8yhMjZm pic.twitter.com/dWwlCO2F88
12. Moving from Linear to Conic Markets for Electricity
Anubhav Ratha, Pierre Pinson, Hélène Le Cadre, Ana Virag, Jalal Kazempour
We propose a new forward electricity market framework that admits heterogeneous market participants with second-order cone strategy sets, who accurately express the nonlinearities in their costs and constraints through conic bids, and a network operator facing conic operational constraints. In contrast to the prevalent linear-programming-based electricity markets, we highlight how the inclusion of second-order cone constraints enables uncertainty-, asset- and network-awareness of the market, which is key to the successful transition towards an electricity system based on weather-dependent renewable energy sources. We analyze our general market-clearing proposal using conic duality theory to derive efficient spatially-differentiated prices for the multiple commodities, comprising energy and flexibility services. Under the assumption of perfect competition, we prove the equivalence of the centrally-solved market-clearing optimization problem to a competitive spatial price equilibrium involving a set of rational and self-interested participants and a price setter. Finally, under common assumptions, we prove that moving towards conic markets does not incur the loss of desirable economic properties of markets, namely market efficiency, cost recovery and revenue adequacy. Our numerical studies focus on the specific use case of uncertainty-aware market design and demonstrate that the proposed conic market brings advantages over existing alternatives within the linear programming market framework.
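For readers used to LP-based clearing, the building block being added is the generic second-order cone constraint

$$\lVert A x + b \rVert_2 \;\le\; c^{\top} x + d,$$

a convex constraint that strictly generalises linear inequalities (take $A = 0$, $b = 0$) and can encode, for example, quadratic cost terms and Gaussian chance-constraint reformulations that an LP cannot. The market-clearing problem then becomes a second-order cone program, which remains efficiently solvable and, as the paper exploits, still admits duality-based pricing.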
Despite recent mathematical & computational advances, electricity markets are still using a linear model, where simplifying assumptions are necessary. Shall we go beyond LP, and use a conic model? You may find this paper interesting to read: https://t.co/H13tQKZLzD @anubhavratha pic.twitter.com/LHRE6fOFqB
— Jalal Kazempour (@JalalKazempour) March 24, 2021
13. DeFLOCNet: Deep Image Editing via Flexible Low-level Controls
Hongyu Liu, Ziyu Wan, Wei Huang, Yibing Song, Xintong Han, Jing Liao, Bing Jiang, Wei Liu
In the image editing scenario, user-intended visual content fills the hole regions of an input image. The coarse low-level inputs, which typically consist of sparse sketch lines and color dots, convey user intentions for content creation (i.e., free-form editing). While existing methods combine an input image and these low-level controls for CNN inputs, the corresponding feature representations are not sufficient to convey user intentions, leading to unfaithfully generated content. In this paper, we propose DeFLOCNet, which relies on a deep encoder-decoder CNN to retain the guidance of these controls in the deep feature representations. In each skip-connection layer, we design a structure generation block. Instead of attaching low-level controls to an input image, we inject these controls directly into each structure generation block for sketch line refinement and color propagation in the CNN feature space. We then concatenate the modulated features with the original decoder features for structure generation. Meanwhile, DeFLOCNet involves another decoder branch for texture generation and detail enhancement. Both structures and textures are rendered in the decoder, leading to user-intended editing results. Experiments on benchmarks demonstrate that DeFLOCNet effectively transforms different user intentions to create visually pleasing content.
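The key move is that the sketch and colour controls are not concatenated to the input image but injected as features into the decoder's skip connections. Below is a generic sketch of such an injection block, a plain convolutional stand-in rather than the paper's structure generation block.

```python
import torch
import torch.nn as nn

class ControlInjection(nn.Module):
    """Adds encoded low-level controls (sketch lines, colour dots) to the
    skip-connection features instead of to the input image."""
    def __init__(self, skip_channels, control_channels=4):   # e.g. 1 sketch + 3 colour
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(control_channels, skip_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(skip_channels, skip_channels, kernel_size=3, padding=1),
        )

    def forward(self, skip_feat, controls):   # controls resized to skip_feat's H, W
        return skip_feat + self.encode(controls)

block = ControlInjection(skip_channels=64)
out = block(torch.randn(1, 64, 56, 56), torch.randn(1, 4, 56, 56))
```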
DeFLOCNet: Deep Image Editing via Flexible Low-level Controls
— AK (@ak92501) March 24, 2021
pdf: https://t.co/ex3LMpLQaF
abs: https://t.co/VGllcDYX9J pic.twitter.com/2dHjYMtGWb
14. BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search
Changlin Li, Tao Tang, Guangrun Wang, Jiefeng Peng, Bing Wang, Xiaodan Liang, Xiaojun Chang
A myriad of recent breakthroughs in hand-crafted neural architectures for visual recognition have highlighted the urgent need to explore hybrid architectures consisting of diversified building blocks. Meanwhile, neural architecture search methods are surging with an expectation to reduce human efforts. However, whether NAS methods can efficiently and effectively handle diversified search spaces with disparate candidates (e.g. CNNs and transformers) is still an open question. In this work, we present Block-wisely Self-supervised Neural Architecture Search (BossNAS), an unsupervised NAS method that addresses the problem of inaccurate architecture rating caused by large weight-sharing space and biased supervision in previous methods. More specifically, we factorize the search space into blocks and utilize a novel self-supervised training scheme, named ensemble bootstrapping, to train each block separately before searching them as a whole towards the population center. Additionally, we present HyTra search space, a fabric-like hybrid CNN-transformer search space with searchable down-sampling positions. On this challenging search space, our searched model, BossNet-T, achieves up to 82.2% accuracy on ImageNet, surpassing EfficientNet by 2.1% with comparable compute time. Moreover, our method achieves superior architecture rating accuracy with 0.78 and 0.76 Spearman correlation on the canonical MBConv search space with ImageNet and on NATS-Bench size search space with CIFAR-100, respectively, surpassing state-of-the-art NAS methods. Code and pretrained models are available at https://github.com/changlin31/BossNAS .
Star Wars Episode One fans 🤝 machine learning fans
— Miles Brundage (@Miles_Brundage) March 24, 2021
"BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search," Li et al.: https://t.co/fkKZDs88Jw
15. Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov
In the standard Markov decision process formalism, users specify tasks by writing down a reward function. However, in many scenarios, the user is unable to describe the task in words or numbers, but can readily provide examples of what the world would look like if the task were solved. Motivated by this observation, we derive a control algorithm from first principles that aims to visit states that have a high probability of leading to successful outcomes, given only examples of successful outcome states. Prior work has approached similar problem settings in a two-stage process, first learning an auxiliary reward function and then optimizing this reward function using another reinforcement learning algorithm. In contrast, we derive a method based on recursive classification that eschews auxiliary reward functions and instead directly learns a value function from transitions and successful outcomes. Our method therefore requires fewer hyperparameters to tune and fewer lines of code to debug. We show that our method satisfies a new data-driven Bellman equation, where examples take the place of the typical reward function term. Experiments show that our approach outperforms prior methods that learn explicit reward functions.
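As a point of reference, the standard Bellman backup that the "data-driven Bellman equation" modifies is

$$Q(s_t, a_t) \;=\; r(s_t, a_t) \;+\; \gamma\, \mathbb{E}_{s_{t+1}}\Big[\max_{a'} Q(s_{t+1}, a')\Big],$$

and the recursive-classification method keeps this recursive structure but replaces the reward term with a quantity estimated directly from the user-provided success examples (the exact form is derived in the paper).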
Replacing Rewards with Examples: Example-Based Policy Search via Recursive Classification
— AK (@ak92501) March 24, 2021
pdf: https://t.co/wXfTiamtEN
abs: https://t.co/Lk8b0qyJvO
project page: https://t.co/MURPUjuPda pic.twitter.com/aUUbRulUP1