1. StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, Dani Lischinski
Inspired by the ability of StyleGAN to generate highly realistic images in a variety of domains, much recent work has focused on understanding how to use the latent spaces of StyleGAN to manipulate generated and real images. However, discovering semantically meaningful latent manipulations typically involves painstaking human examination of the many degrees of freedom, or an annotated collection of images for each desired manipulation. In this work, we explore leveraging the power of recently introduced Contrastive Language-Image Pre-training (CLIP) models in order to develop a text-based interface for StyleGAN image manipulation that does not require such manual effort. We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt. Next, we describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation. Finally, we present a method for mapping text prompts to input-agnostic directions in StyleGAN’s style space, enabling interactive text-driven image manipulation. Extensive results and comparisons demonstrate the effectiveness of our approaches.
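The optimization scheme in the first part is simple enough to sketch. Below is a minimal, hedged PyTorch version: `G` (a pretrained StyleGAN generator) and `w_init` (the inverted latent of the input image) are placeholders, and the paper's identity-preservation loss is omitted.

```python
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda"
model, _ = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["a person with blue hair"]).to(device)

# Assumed to exist: G (pretrained StyleGAN generator), w_init (inverted latent).
w = w_init.clone().requires_grad_(True)
opt = torch.optim.Adam([w], lr=0.01)

for _ in range(300):
    img = G.synthesis(w)                                   # hypothetical generator call
    img = torch.nn.functional.interpolate(img, size=224, mode="bilinear")
    sim = torch.cosine_similarity(model.encode_image(img), model.encode_text(text))
    # The CLIP loss pulls the image toward the prompt; the L2 term keeps w near the input.
    loss = (1 - sim).mean() + 0.008 * ((w - w_init) ** 2).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```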
StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery
— AK (@ak92501) April 1, 2021
pdf: https://t.co/20g03cMRYu
abs: https://t.co/XN4XjVX7Pi
github: https://t.co/6fxI9mDXUN pic.twitter.com/DNRHX3bmtk
2. Going deeper with Image Transformers
Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou
Transformers have been recently adapted for large-scale image classification, achieving high scores and shaking up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay of architecture and optimization of such dedicated transformers. We make two transformer architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth; for instance, we obtain 86.3% top-1 accuracy on Imagenet when training with no external data. Our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data.
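The abstract does not name the two changes; in the paper they are LayerScale and class-attention layers. Here is a minimal PyTorch sketch of LayerScale, the one that keeps very deep models trainable: each residual branch is scaled by a learnable per-channel vector initialized near zero (the sketch is ours, not the released code).

```python
import torch
import torch.nn as nn

class LayerScaleBlock(nn.Module):
    """Residual block whose branch output is scaled by a learnable vector."""
    def __init__(self, dim, sublayer, init_eps=1e-5):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.sublayer = sublayer                          # self-attention or MLP
        self.scale = nn.Parameter(init_eps * torch.ones(dim))

    def forward(self, x):
        # Near-identity at initialization, so very deep stacks still optimize.
        return x + self.scale * self.sublayer(self.norm(x))
```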
Going deeper with Image Transformers
— Aran Komatsuzaki (@arankomatsuzaki) April 1, 2021
Achieves the new SotA on Imagenet benchmarks with a deeper transformer architecture optimized for image classification.https://t.co/CgfBFYPSAy pic.twitter.com/q8cbEkP0Qv
Going deeper with Image Transformers
— AK (@ak92501) April 1, 2021
pdf: https://t.co/7JhBix6DPh
abs: https://t.co/zaWehUCNMc
"Our best model establishes the new state of the art on Imagenet with Reassessed labels and Imagenet-V2 / match frequency, in the setting with no additional training data. pic.twitter.com/M5IX7A566J
3. Using Artificial Intelligence to Shed Light on the Star of Biscuits: The Jaffa Cake
H. F. Stevance
- retweets: 2095, favorites: 141 (04/02/2021 10:43:09)
- astro-ph.IM | cs.AI | cs.LG
Before Brexit, one of the greatest causes of arguments amongst British families was the question of the nature of Jaffa Cakes. Some argue that their size and host environment (the biscuit aisle) should make them a biscuit in their own right. Others consider that their physical properties (e.g. they harden rather than soften on becoming stale) suggest that they are in fact cake. In order to finally put this debate to rest, we re-purpose technologies used to classify transient events. We train two classifiers (a Random Forest and a Support Vector Machine) on 100 recipes of traditional cakes and biscuits. Our classifiers have 95 percent and 91 percent accuracy, respectively. We then feed two Jaffa Cake recipes to the algorithms and find that Jaffa Cakes are, without a doubt, cakes. Finally, we suggest a new theory as to why some believe Jaffa Cakes are biscuits.
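For readers who want to re-run the experiment, a hedged scikit-learn sketch follows; the recipe CSVs and their feature columns are hypothetical stand-ins for the 100 recipes used in the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

recipes = pd.read_csv("recipes.csv")            # hypothetical: ingredient grams per recipe
X, y = recipes.drop(columns="label"), recipes["label"]   # label: "cake" or "biscuit"

for clf in (RandomForestClassifier(n_estimators=100), SVC(kernel="rbf")):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())

jaffa = pd.read_csv("jaffa_cakes.csv")          # the two contested recipes
print(RandomForestClassifier().fit(X, y).predict(jaffa))   # expected: cake, cake
```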
🤡IT'S APRIL'S FOOLS TIME🤡
— Dr. Héloïse Stevance 🖤✨(she) (@Sydonahi) April 1, 2021
This paper is two years in the making so I hope you will enjoy it!
"Using Artificial Intelligence to Shed Light on the Star of Biscuits: The Jaffa Cake"https://t.co/IrrBQ59FB9
4. On the Origin of Species of Self-Supervised Learning
Samuel Albanie, Erika Lu, Joao F. Henriques
In the quiet backwaters of cs.CV, cs.LG and stat.ML, a cornucopia of new learning systems is emerging from a primordial soup of mathematics: learning systems with no need for external supervision. To date, little thought has been given to how these self-supervised learners have sprung into being or the principles that govern their continuing diversification. After a period of deliberate study and dispassionate judgement during which each author set their Zoom virtual background to a separate Galapagos island, we now entertain no doubt that each of these learning machines are lineal descendants of some older and generally extinct species. We make five contributions: (1) We gather and catalogue row-major arrays of machine learning specimens, each exhibiting heritable discriminative features; (2) We document a mutation mechanism by which almost imperceptible changes are introduced to the genotype of new systems, but their phenotype (birdsong in the form of tweets and vestigial plumage such as press releases) communicates dramatic changes; (3) We propose a unifying theory of self-supervised machine evolution and compare to other unifying theories on standard unifying theory benchmarks, where we establish a new (and unifying) state of the art; (4) We discuss the importance of digital biodiversity, in light of the endearingly optimistic Paris Agreement.
On the Origin of Species of Self-Supervised Learning
— Aran Komatsuzaki (@arankomatsuzaki) April 1, 2021
Proposes a unifying theory of self-supervised machine evolution and compares to other unifying theories on standard unifying theory benchmarks, where they establish a new (and unifying) SotA.https://t.co/3XFHlBIdfE pic.twitter.com/QrVC0TrrJ8
5. Learning Generalizable Robotic Reward Functions from “In-The-Wild” Human Videos
Annie S. Chen, Suraj Nair, Chelsea Finn
We are motivated by the goal of generalist robots that can complete a wide range of tasks across many environments. Critical to this is the robot’s ability to acquire some metric of task success or reward, which is necessary for reinforcement learning, planning, or knowing when to ask for help. For a general-purpose robot operating in the real world, this reward function must also be able to generalize broadly across environments, tasks, and objects, while depending only on on-board sensor observations (e.g. RGB images). While deep learning on large and diverse datasets has shown promise as a path towards such generalization in computer vision and natural language, collecting high quality datasets of robotic interaction at scale remains an open challenge. In contrast, “in-the-wild” videos of humans (e.g. YouTube) contain an extensive collection of people doing interesting tasks across a diverse range of settings. In this work, we propose a simple approach, Domain-agnostic Video Discriminator (DVD), that learns multitask reward functions by training a discriminator to classify whether two videos are performing the same task, and can generalize by virtue of learning from a small amount of robot data with a broad dataset of human videos. We find that by leveraging diverse human datasets, this reward function (a) can generalize zero-shot to unseen environments, (b) can generalize zero-shot to unseen tasks, and (c) can be combined with visual model predictive control to solve robotic manipulation tasks on a real WidowX200 robot in an unseen environment from a single human demo.
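A minimal sketch of the DVD objective as we read the abstract (the released code will differ in detail): a discriminator is trained to say whether two clips perform the same task, and its score on (robot rollout, human demo) pairs is then used as a reward.

```python
import torch
import torch.nn as nn

class DVD(nn.Module):
    def __init__(self, video_encoder, dim=256):
        super().__init__()
        self.enc = video_encoder                 # any clip-level video encoder (assumed)
        self.head = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, clip_a, clip_b):
        z = torch.cat([self.enc(clip_a), self.enc(clip_b)], dim=-1)
        return self.head(z)                      # logit: do the clips show the same task?

# Training: binary cross-entropy on video pairs labelled same-task / different-task,
# mixing a small amount of robot data with large human video datasets.
# Reward at test time: sigmoid(dvd(robot_clip, human_demo)).
```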
How can robots generalize to new environments & tasks?
— Chelsea Finn (@chelseabfinn) April 1, 2021
We find that using in-the-wild videos of people can allow learned reward functions to do so!
Paper: https://t.co/afz2PWw0rT
Led by @_anniechen_, @SurajNair_1
🧵(1/5) pic.twitter.com/5BqpzVgK31
6. What do Indian Researchers download from Sci-Hub
Vivek Kumar Singh, Satya Swarup Srichandan, Sujit Bhattacharya
Recently, three foreign academic publishers filed a case of copyright infringement against Sci-Hub and LibGen before the Delhi High Court and prayed for the complete blocking of these websites in India. In this context, this paper attempts to assess the impact that blocking Sci-Hub may have on the Indian research community. The download requests originating from India on a daily basis are counted, geotagged, and analysed by discipline, publisher, country, and publication year. Results indicate that blocking Sci-Hub in India may actually hurt the Indian research community in a significant way.
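The analysis itself is straightforward log aggregation; here is a pandas sketch of the kind of breakdown the paper reports (the file and column names are our assumptions, not the authors' data).

```python
import pandas as pd

logs = pd.read_csv("scihub_india_downloads.csv")   # hypothetical geotagged log extract
logs["day"] = pd.to_datetime(logs["timestamp"]).dt.date

print(logs.groupby("day").size().mean())                            # average daily downloads
print(logs["publisher"].value_counts(normalize=True).head())        # share per publisher
print(logs.groupby("publication_year").size().sort_index().tail())  # demand by recency
```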
Why Sci-Hub should not be blocked in India- evidence from Sci-Hub access log analysis. See- https://t.co/wzJujSGY3E#scihub @SciResMatters @spf_in @arulscaria @rsidd120 @b_sujit1965 @OAIndia @openscience @SciHubUpdated @asia_open @ashwani_mahajan
— Vivek Singh (@vivekks12) April 1, 2021
7. Dual Contrastive Loss and Attention for GANs
Ning Yu, Guilin Liu, Aysegul Dundar, Andrew Tao, Bryan Catanzaro, Larry Davis, Mario Fritz
Generative Adversarial Networks (GANs) produce impressive results on unconditional image generation when powered with large-scale image datasets. Yet generated images are still easy to spot, especially on datasets with high variance (e.g. bedroom, church). In this paper, we propose various improvements to further push the boundaries in image generation. Specifically, we propose a novel dual contrastive loss and show that, with this loss, the discriminator learns more generalized and distinguishable representations to incentivize generation. In addition, we revisit attention and extensively experiment with different attention blocks in the generator. We find attention is still an important module for successful image generation, even though it was not used in recent state-of-the-art models. Lastly, we study different attention architectures in the discriminator and propose a reference attention mechanism. By combining the strengths of these remedies, we improve the compelling state-of-the-art Fréchet Inception Distance (FID) by at least 17.5% on several benchmark datasets. We obtain even more significant improvements on compositional synthetic scenes (up to 47.5% in FID).
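A hedged sketch of a dual contrastive loss in the spirit of the paper: each real logit is contrasted against the batch of fake logits, and, dually, each negated fake logit against the negated reals. The paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def dual_contrastive_loss(real_logits, fake_logits):
    # real_logits, fake_logits: 1-D tensors of discriminator outputs
    zeros_r = torch.zeros(len(real_logits), dtype=torch.long, device=real_logits.device)
    zeros_f = torch.zeros(len(fake_logits), dtype=torch.long, device=fake_logits.device)
    # Term 1: each real sample is the positive among all fakes.
    l_real = F.cross_entropy(torch.cat(
        [real_logits.unsqueeze(1), fake_logits.unsqueeze(0).expand(len(real_logits), -1)],
        dim=1), zeros_r)
    # Term 2 (the "dual"): each fake, with flipped sign, among all reals.
    l_fake = F.cross_entropy(torch.cat(
        [(-fake_logits).unsqueeze(1), (-real_logits).unsqueeze(0).expand(len(fake_logits), -1)],
        dim=1), zeros_f)
    return l_real + l_fake
```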
Dual Contrastive Loss and Attention for GANs
— Aran Komatsuzaki (@arankomatsuzaki) April 1, 2021
Improve the SotA FID by at least 17.5% on several benchmark datasets by improving attention blocks and adding a novel dual contrastive loss.https://t.co/aUEaMQql01 pic.twitter.com/ugykdhSiw4
Dual Contrastive Loss and Attention for GANs
— AK (@ak92501) April 1, 2021
pdf: https://t.co/mHYdXWFRzp
abs: https://t.co/wwJfH0uXsR
"By combining the strengths of these remedies, we improve the compelling state-of-the-art Frechet Inception Distance (FID) by at least 17.5% on several benchmark datasets." pic.twitter.com/hvD83YZe4w
8. A Neighbourhood Framework for Resource-Lean Content Flagging
Sheikh Muhammad Sarwar, Dimitrina Zlatkova, Momchil Hardalov, Yoan Dinkov, Isabelle Augenstein, Preslav Nakov
We propose a novel interpretable framework for cross-lingual content flagging, which significantly outperforms prior work both in terms of predictive performance and average inference time. The framework is based on a nearest-neighbour architecture and is interpretable by design. Moreover, it can easily adapt to new instances without the need to retrain it from scratch. Unlike prior work, (i) we encode not only the texts, but also the labels in the neighbourhood space (which yields better accuracy), and (ii) we use a bi-encoder instead of a cross-encoder (which saves computation time). Our evaluation results on ten different datasets for abusive language detection in eight languages show sizable improvements over the state of the art, as well as a speed-up at inference time.
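A simplified sketch of the bi-encoder neighbourhood idea (the paper additionally encodes the labels themselves; `bank_texts` and `bank_label_ids`, the labelled training set, are assumptions):

```python
import torch
from sentence_transformers import SentenceTransformer

enc = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # any multilingual bi-encoder

bank_vecs = torch.tensor(enc.encode(bank_texts))     # encode neighbours once, offline
bank_labels = torch.tensor(bank_label_ids).float()   # 0 = ok, 1 = abusive

def flag(text, k=8):
    q = torch.tensor(enc.encode([text]))
    sims = torch.cosine_similarity(q, bank_vecs)     # new instances need no retraining
    top = sims.topk(k)
    w = torch.softmax(top.values, dim=0)
    return (w * bank_labels[top.indices]).sum()      # similarity-weighted abusive score
```

Interpretability falls out of the design: the retrieved neighbours themselves are the explanation for a flag.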
Excited about our new preprint, presenting an efficient, effective & inherently interpretable framework for content flagging. This is the result of @zzz2aaa's internship @checkstep & work w. @didizlatkova @mhardalov Yoan Dinkov @preslav_nakov https://t.co/XMeZS7fSLM#NLProc pic.twitter.com/gRxAsbR7gi
— Isabelle Augenstein (@IAugenstein) April 1, 2021
9. Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data
Oscar Mañas, Alexandre Lacoste, Xavier Giro-i-Nieto, David Vazquez, Pau Rodriguez
Remote sensing and automatic earth monitoring are key to solving global-scale challenges such as disaster prevention, land use monitoring, or tackling climate change. Although there exist vast amounts of remote sensing data, most of it remains unlabeled and thus inaccessible for supervised learning algorithms. Transfer learning approaches can reduce the data requirements of deep learning algorithms. However, most of these methods are pre-trained on ImageNet and their generalization to remote sensing imagery is not guaranteed due to the domain gap. In this work, we propose Seasonal Contrast (SeCo), an effective pipeline to leverage unlabeled data for in-domain pre-training of remote sensing representations. The SeCo pipeline is composed of two parts. First, a principled procedure to gather large-scale, unlabeled and uncurated remote sensing datasets containing images from multiple Earth locations at different timestamps. Second, a self-supervised algorithm that takes advantage of time and position invariance to learn transferable representations for remote sensing applications. We empirically show that models trained with SeCo achieve better performance than their ImageNet pre-trained counterparts and state-of-the-art self-supervised learning methods on multiple downstream tasks. The datasets and models in SeCo will be made public to facilitate transfer learning and enable rapid progress in remote sensing applications.
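The time-invariance part of SeCo reduces to a familiar contrastive setup; a minimal sketch follows (the paper additionally learns embedding subspaces that remain variant to time).

```python
import torch
import torch.nn.functional as F

def seasonal_infonce(encoder, img_t0, img_t1, tau=0.07):
    # img_t0, img_t1: batches of co-located satellite patches, months apart
    z0 = F.normalize(encoder(img_t0), dim=1)
    z1 = F.normalize(encoder(img_t1), dim=1)
    logits = z0 @ z1.T / tau                  # (B, B); diagonal entries are positives
    labels = torch.arange(len(z0), device=z0.device)
    return F.cross_entropy(logits, labels)    # rewards invariance to seasonal change
```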
Are you using ImageNet pre-training on😺🐶 for satellite imagery? Seasonal Contrast (SeCO) does better with self-supervised pre-training on unlabeled 🛰️ data w/ temporal changes! Done during @oscmansan internship @element_ai @servicenow 🔗https://t.co/Z3kGy2rDUo
— Pau Rodríguez López (@prlz77) April 1, 2021
Public code soon! pic.twitter.com/3kRt486owK
10. Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans
Bindita Chaudhuri, Nikolaos Sarafianos, Linda Shapiro, Tony Tung
We introduce a novel approach to generate diverse high-fidelity texture maps for 3D human meshes in a semi-supervised setup. Given a segmentation mask defining the layout of the semantic regions in the texture map, our network generates high-resolution textures with a variety of styles, which are then used for rendering. To accomplish this task, we propose a Region-adaptive Adversarial Variational AutoEncoder (ReAVAE) that learns the probability distribution of the style of each region individually, so that the style of the generated texture can be controlled by sampling from the region-specific distributions. In addition, we introduce a data generation technique to augment our training set with data lifted from single-view RGB inputs. Our training strategy allows the mixing of reference image styles with arbitrary styles for different regions, a property which can be valuable for virtual try-on AR/VR applications. Experimental results show that our method synthesizes better texture maps compared to prior work while enabling independent layout and style controllability.
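A heavily hedged sketch of the region-adaptive sampling the abstract describes: one latent distribution per semantic region, so each region's style can be sampled or copied from a reference independently. All names below are ours, not the paper's.

```python
import torch

def assemble_styles(mu, logvar, keep_reference):
    # mu, logvar: (R, d) per-region style posteriors from the reference encoder;
    # keep_reference: (R,) bools, True = keep the reference image's style.
    styles = []
    for r in range(mu.size(0)):                # e.g. skin, hair, upper clothes, ...
        if keep_reference[r]:
            z = mu[r]                          # reference style for this region
        else:
            z = mu[r] + torch.randn_like(mu[r]) * (0.5 * logvar[r]).exp()  # sampled style
        styles.append(z)
    return torch.stack(styles)                 # (R, d), fed to the texture generator
```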
Semi-supervised Synthesis of High-Resolution Editable Textures for 3D Humans
— AK (@ak92501) April 1, 2021
pdf: https://t.co/yBOSW4VWFE
abs: https://t.co/kHsMxdnMPk
project page: https://t.co/3w20O67cmY pic.twitter.com/b1S0wvQ5yq
11. Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes
Dmytro Kotovenko, Matthias Wright, Arthur Heimbrecht, Björn Ommer
There have been many successful implementations of neural style transfer in recent years. In most of these works, the stylization process is confined to the pixel domain. However, we argue that this representation is unnatural because paintings usually consist of brushstrokes rather than pixels. We propose a method to stylize images by optimizing parameterized brushstrokes instead of pixels and further introduce a simple differentiable rendering mechanism. Our approach significantly improves visual quality and enables additional control over the stylization process such as controlling the flow of brushstrokes through user input. We provide qualitative and quantitative evaluations that show the efficacy of the proposed parameterized representation.
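The optimization loop implied by the abstract is easy to sketch; `render`, `style_loss`, and `content_loss` below are hypothetical stand-ins for the paper's differentiable renderer and losses.

```python
import torch

# Each stroke: e.g. start/end point, curvature, width, RGB colour (parameterization is ours).
strokes = torch.randn(5000, 8, requires_grad=True)
opt = torch.optim.Adam([strokes], lr=0.02)

for _ in range(1000):
    canvas = render(strokes)                   # hypothetical differentiable renderer
    loss = style_loss(canvas, style_img) + content_loss(canvas, content_img)
    opt.zero_grad(); loss.backward(); opt.step()   # gradients flow to stroke parameters
```

The point of the reparameterization is exactly this loop: the same style-transfer gradients now move brushstrokes rather than individual pixels.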
Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes
— AK (@ak92501) April 1, 2021
pdf: https://t.co/5XT1A34usy
abs: https://t.co/6uVWXT2iSO pic.twitter.com/srBUvXioZR
Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes
— Aran Komatsuzaki (@arankomatsuzaki) April 1, 2021
Proposes a method to stylize images by optimizing parameterized brushstrokes instead of pixels, which significantly improves visual quality.
abs: https://t.co/fYbG38oo9s
code: https://t.co/aSDtwaJNdD pic.twitter.com/OtR5VtDzFA
12. Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications
Philip Matthias Winter, Sebastian Eder, Johannes Weissenböck, Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler
Artificial Intelligence is one of the fastest-growing technologies of the 21st century and accompanies us in our daily lives when interacting with technical applications. However, reliance on such technical systems is crucial for their widespread applicability and acceptance. The societal tools to express reliance are usually formalized by lawful regulations, i.e., standards, norms, accreditations, and certificates. Therefore, the TÜV AUSTRIA Group, in cooperation with the Institute for Machine Learning at the Johannes Kepler University Linz, proposes a certification process and an audit catalog for Machine Learning applications. We are convinced that our approach can serve as the foundation for the certification of applications that use Machine Learning and Deep Learning, the techniques that drive the current revolution in Artificial Intelligence. While certain high-risk areas, such as fully autonomous robots in workspaces shared with humans, are still some time away from certification, we aim to cover low-risk applications with our certification procedure. Our holistic approach attempts to analyze Machine Learning applications from multiple perspectives to evaluate and verify the aspects of secure software development, functional requirements, data quality, data protection, and ethics. Inspired by existing work, we introduce four criticality levels to map the criticality of a Machine Learning application regarding the impact of its decisions on people, environment, and organizations. Currently, the audit catalog can be applied to low-risk applications within the scope of supervised learning as commonly encountered in industry. Guided by field experience, scientific developments, and market demands, the audit catalog will be extended and modified accordingly.
Our white paper "Trusted AI: Towards Certification of Machine Learning Applications" is now available on arxiv: https://t.co/C73Jqz5S3m
— Philip M. Winter (@PhilipMWinter) April 1, 2021
13. VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization
Seunghwan Choi, Sunghyun Park, Minsoo Lee, Jaegul Choo
The task of image-based virtual try-on aims to transfer a target clothing item onto the corresponding region of a person, which is commonly tackled by fitting the item to the desired body part and fusing the warped item with the person. While an increasing number of studies have been conducted, the resolution of synthesized images is still low (e.g., 256x192), which is a critical limitation for satisfying online consumers. We argue that the limitation stems from several challenges: as the resolution increases, the artifacts in the misaligned areas between the warped clothes and the desired clothing regions become noticeable in the final results; the architectures used in existing methods have low performance in generating high-quality body parts and maintaining the texture sharpness of the clothes. To address the challenges, we propose a novel virtual try-on method called VITON-HD that successfully synthesizes 1024x768 virtual try-on images. Specifically, we first prepare the segmentation map to guide our virtual try-on synthesis, and then roughly fit the target clothing item to a given person’s body. Next, we propose ALIgnment-Aware Segment (ALIAS) normalization and the ALIAS generator to handle the misaligned areas and preserve the details of 1024x768 inputs. Through rigorous comparison with existing methods, we demonstrate that VITON-HD clearly surpasses the baselines in terms of synthesized image quality, both qualitatively and quantitatively.
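A hedged sketch of the normalization idea as the abstract presents it: statistics are computed only outside the misaligned region, and modulation parameters come from the segmentation map, SPADE-style. Details will differ from the paper.

```python
import torch
import torch.nn as nn

class AliasNormSketch(nn.Module):
    def __init__(self, channels, seg_channels):
        super().__init__()
        self.gamma = nn.Conv2d(seg_channels, channels, 3, padding=1)
        self.beta = nn.Conv2d(seg_channels, channels, 3, padding=1)

    def forward(self, x, segmap, misalign_mask):
        keep = 1.0 - misalign_mask                        # (B,1,H,W): 1 where regions align
        n = keep.sum((2, 3), keepdim=True).clamp(min=1)
        mean = (x * keep).sum((2, 3), keepdim=True) / n   # stats exclude misaligned pixels
        var = ((x - mean) ** 2 * keep).sum((2, 3), keepdim=True) / n
        x = (x - mean) / (var + 1e-5).sqrt()
        return x * (1 + self.gamma(segmap)) + self.beta(segmap)  # segmap-driven modulation
```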
VITON-HD: High-Resolution Virtual Try-On via Misalignment-Aware Normalization
— AK (@ak92501) April 1, 2021
pdf: https://t.co/dXrLMTwYHq
abs: https://t.co/QrlQDYOZDk pic.twitter.com/XLiZIEi24X
14. Symmetric and antisymmetric kernels for machine learning problems in quantum physics and chemistry
Stefan Klus, Patrick Gelß, Feliks Nüske, Frank Noé
- retweets: 90, favorites: 87 (04/02/2021 10:43:12)
- quant-ph | math-ph | physics.chem-ph | stat.ML
We derive symmetric and antisymmetric kernels by symmetrizing and antisymmetrizing conventional kernels and analyze their properties. In particular, we compute the feature space dimensions of the resulting polynomial kernels, prove that the reproducing kernel Hilbert spaces induced by symmetric and antisymmetric Gaussian kernels are dense in the space of symmetric and antisymmetric functions, and propose a Slater determinant representation of the antisymmetric Gaussian kernel, which allows for an efficient evaluation even if the state space is high-dimensional. Furthermore, we show that by exploiting symmetries or antisymmetries the size of the training data set can be significantly reduced. The results are illustrated with guiding examples and simple quantum physics and chemistry applications.
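The Slater-determinant trick is compact enough to show directly: for a product kernel, the signed sum over all n! permutations collapses into a single determinant via the Leibniz formula (the 1/n! normalization is our reading of the convention).

```python
import math
import numpy as np

def antisymmetric_gaussian_kernel(x, y, sigma=1.0):
    # x, y: (n,) coordinates of n one-dimensional particles
    K = np.exp(-np.subtract.outer(x, y) ** 2 / (2 * sigma ** 2))  # K[i, j] = k(x_i, y_j)
    return np.linalg.det(K) / math.factorial(len(x))   # determinant = signed permutation sum

# Swapping two particles flips the sign, as required for fermionic wavefunctions:
x, y = np.array([0.1, 0.7, 1.3]), np.array([0.2, 0.5, 0.9])
assert np.isclose(antisymmetric_gaussian_kernel(x[[1, 0, 2]], y),
                  -antisymmetric_gaussian_kernel(x, y))
```

This is why the evaluation stays efficient in high dimensions: a determinant costs O(n^3) rather than the naive n! terms.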
Antisymmetry is what makes electronic structure calculations hard. We need more #MachineLearning work to make progress with this fundamental problem. Stefan Klus takes a stab at it by developing antisymmetric kernels:https://t.co/UOzW3w2k3c
— Frank Noe (@FrankNoeBerlin) April 1, 2021
15. Tracking Knowledge Propagation Across Wikipedia Languages
Rodolfo Valentim, Giovanni Comarela, Souneil Park, Diego Saez-Trumper
In this paper, we present a dataset of inter-language knowledge propagation in Wikipedia. Covering all 309 language editions and 33M articles, the dataset aims to track the full propagation history of Wikipedia concepts and to enable follow-up research on building predictive models of it. For this purpose, we align all the Wikipedia articles in a language-agnostic manner according to the concept they cover, which results in 13M propagation instances. To the best of our knowledge, this dataset is the first to explore the full inter-language propagation at a large scale. Together with the dataset, a holistic overview of the propagation and key insights about the underlying structural factors are provided to aid future research. For example, we find that although long cascades are unusual, the propagation tends to continue further once it reaches more than four language editions. We also find that the size of language editions is associated with the speed of propagation. We believe the dataset not only contributes to the prior literature on Wikipedia growth but also enables new use cases such as edit recommendation for addressing knowledge gaps, detection of disinformation, and cultural relationship analysis.
Our dataset paper "Tracking Knowledge Propagation Across #Wikipedia Languages" accepted at @icwsm is now available
— Diego ST (@e__migrante) April 1, 2021
Topics and creation time for each article across all languages +model to predict content propagation in WP
Paper:https://t.co/xUTxyiEJ67
Data:https://t.co/ShGZi9MREg pic.twitter.com/MmAtlJuu2E
16. BASE Layers: Simplifying Training of Large, Sparse Models
Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer
We introduce a new balanced assignment of experts (BASE) layer for large language models that greatly simplifies existing high capacity sparse layers. Sparse layers can dramatically improve the efficiency of training and inference by routing each token to specialized expert modules that contain only a small fraction of the model parameters. However, it can be difficult to learn balanced routing functions that make full use of the available experts; existing approaches typically use routing heuristics or auxiliary expert-balancing loss functions. In contrast, we formulate token-to-expert allocation as a linear assignment problem, allowing an optimal assignment in which each expert receives an equal number of tokens. This optimal assignment scheme improves efficiency by guaranteeing balanced compute loads, and also simplifies training by not requiring any new hyperparameters or auxiliary losses. Code is publicly released at https://github.com/pytorch/fairseq/
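The core idea fits in a few lines with an off-the-shelf solver; the released implementation instead uses a faster auction algorithm sharded across workers, so treat this as a toy sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def base_assign(scores):
    # scores: (tokens, experts) router affinities; tokens must be a multiple of experts.
    T, E = scores.shape
    cost = -np.repeat(scores, T // E, axis=1)      # give each expert T/E identical "slots"
    _, slot = linear_sum_assignment(cost)          # optimal balanced assignment
    return slot // (T // E)                        # expert id per token, exactly T/E each

experts = base_assign(np.random.rand(16, 4))       # every expert receives 4 of the 16 tokens
```

Because balance is guaranteed by construction, no auxiliary load-balancing loss or capacity hyperparameter is needed, which is the simplification the title refers to.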
BASE Layers: Simplifying Training of Large, Sparse Models
— Aran Komatsuzaki (@arankomatsuzaki) April 1, 2021
Simplifies the loading of sparse MoE by formulating token-to-expert allocation as a linear assignment problem, which outperforms Switch Transformer.https://t.co/RneE13TCEu pic.twitter.com/FSJNwXysZy
17. Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective
Jiarui Xu, Xiaolong Wang
Learning a good representation for space-time correspondence is the key for various computer vision tasks, including tracking object bounding boxes and performing video object pixel segmentation. To learn generalizable representations for correspondence at scale, a variety of self-supervised pretext tasks have been proposed to explicitly perform object-level or patch-level similarity learning. Instead of following the previous literature, we propose to learn correspondence using Video Frame-level Similarity (VFS) learning, i.e., simply learning from comparing video frames. Our work is inspired by the recent success of image-level contrastive learning and similarity learning for visual recognition. Our hypothesis is that if the representation is good for recognition, it requires the convolutional features to find correspondence between similar objects or parts. Our experiments show the surprising result that VFS surpasses state-of-the-art self-supervised approaches for both OTB visual object tracking and DAVIS video object segmentation. We perform a detailed analysis of what matters in VFS and reveal new properties of image- and frame-level similarity learning. Project page is available at https://jerryxu.net/VFS.
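A minimal sketch of the VFS recipe as we read the abstract. The paper studies both contrastive and negative-free frame similarity; sketched here is a negative-free step with a predictor head and stop-gradient, both standard ingredients assumed rather than quoted from the paper.

```python
import torch
import torch.nn.functional as F

def vfs_step(encoder, predictor, frames_a, frames_b):
    # frames_a, frames_b: two frames randomly sampled from each video in the batch
    pa = F.normalize(predictor(encoder(frames_a)), dim=1)
    zb = F.normalize(encoder(frames_b), dim=1).detach()   # stop-gradient target branch
    return -(pa * zb).sum(dim=1).mean()   # pull the two frames' features together
```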
Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective
— AK (@ak92501) April 1, 2021
pdf: https://t.co/pGyVepM7Eg
abs: https://t.co/tcIqTXOfqq
project page: https://t.co/TCMn90RbDs pic.twitter.com/jxwnGDKAOI
18. Learning Spatio-Temporal Transformer for Visual Tracking
Bin Yan, Houwen Peng, Jianlong Fu, Dong Wang, Huchuan Lu
In this paper, we present a new tracking architecture with an encoder-decoder transformer as the key component. The encoder models the global spatio-temporal feature dependencies between target objects and search regions, while the decoder learns a query embedding to predict the spatial positions of the target objects. Our method casts object tracking as a direct bounding box prediction problem, without using any proposals or predefined anchors. With the encoder-decoder transformer, prediction reduces to a simple fully-convolutional network, which estimates the corners of objects directly. The whole method is end-to-end and does not need any post-processing steps such as cosine windowing or bounding-box smoothing, thus largely simplifying existing tracking pipelines. The proposed tracker achieves state-of-the-art performance on five challenging short-term and long-term benchmarks, while running at real-time speed, 6x faster than Siam R-CNN. Code and models are open-sourced at https://github.com/researchmm/Stark.
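A compact sketch of the architecture as the abstract describes it; the dimensions and the linear box head are our simplifications (the paper uses a fully-convolutional corner estimator).

```python
import torch
import torch.nn as nn

class StarkSketch(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.transformer = nn.Transformer(dim, heads, num_encoder_layers=6,
                                          num_decoder_layers=6, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, dim))   # learned target query
        self.box_head = nn.Linear(dim, 4)      # stand-in for the corner-prediction FCN

    def forward(self, template_feats, search_feats):
        # Encoder sees template + search tokens jointly; decoder reads out one query.
        src = torch.cat([template_feats, search_feats], dim=1)        # (B, N, dim)
        out = self.transformer(src, self.query.expand(src.size(0), -1, -1))
        return self.box_head(out).sigmoid()    # normalized (x1, y1, x2, y2), no anchors
```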
Learning Spatio-Temporal Transformer for Visual Tracking
— AK (@ak92501) April 1, 2021
pdf: https://t.co/OMXCIhMHxm
abs: https://t.co/2JDwHTwI95
github: https://t.co/RwBZaWVcRm
proposed tracker achieves SOTA performance on five challenging short-term and long-term benchmarks, while running at real-time speed pic.twitter.com/KzoFKJNfbJ
19. ReMix: Towards Image-to-Image Translation with Limited Data
Jie Cao, Luanxuan Hou, Ming-Hsuan Yang, Ran He, Zhenan Sun
Image-to-image (I2I) translation methods based on generative adversarial networks (GANs) typically suffer from overfitting when limited training data is available. In this work, we propose a data augmentation method (ReMix) to tackle this issue. We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples. The generator learns to translate the in-between samples rather than memorizing the training set, and thereby forces the discriminator to generalize. The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results. The ReMix method can be easily incorporated into existing GAN models with minor modifications. Experimental results on numerous tasks demonstrate that GAN models equipped with the ReMix method achieve significant improvements.
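The augmentation itself is a feature-level mixup; a hedged sketch follows (the perceptual-relation content loss that accompanies it in the paper is elided).

```python
import torch

def remix(features, alpha=0.2):
    # features: (B, ...) encoder activations for a batch of source images
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(features.size(0), device=features.device)
    mixed = lam * features + (1 - lam) * features[perm]   # the "in-between" samples
    return mixed, perm, lam   # perm and lam feed the relation-based content loss
```

Training the generator on these interpolated samples is what prevents it from simply memorizing the small training set.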
ReMix: Towards Image-to-Image Translation with Limited Data
— AK (@ak92501) April 1, 2021
pdf: https://t.co/PQn8c8wH23
abs: https://t.co/jP8Wmz0IsY pic.twitter.com/e5f9UImQ9Z
20. A genuinely natural information measure
Andreas Winter
The theoretical measuring of information was famously initiated by Shannon in his mathematical theory of communication, in which he proposed a now widely used quantity, the entropy, measured in bits. Yet, in the same paper, Shannon also chose to measure the information in continuous systems in nats, which differ from bits by the use of the natural rather than the binary logarithm. We point out that there is nothing natural about the choice of logarithm basis; rather, it is arbitrary. We remedy this problematic state of affairs by proposing a genuinely natural measure of information, which we dub gnats. We show that gnats have many advantages in information theory, and propose to adopt the underlying methodology throughout science, arts and everyday life.
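For the record, the arithmetic underneath the joke is real: any two logarithm bases differ only by a constant factor, so entropies convert exactly.

```python
import math

p = [0.5, 0.25, 0.25]
h_bits = -sum(q * math.log2(q) for q in p)   # 1.5 bits
h_nats = -sum(q * math.log(q) for q in p)    # ~= 1.0397 nats
assert abs(h_nats - h_bits * math.log(2)) < 1e-12   # nats = bits * ln 2, exactly
```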
Today's preprint "A genuinely natural information measure" (https://t.co/4xSUL0YenQ) by Andreas Winter from @GIQ_BCN, may (or may not) revolutionise information theory. pic.twitter.com/pbm96X31JQ
— Quantum Info at UAB (@GIQ_BCN) April 1, 2021