1. Exposing GAN-generated Faces Using Inconsistent Corneal Specular Highlights
Shu Hu, Yuezun Li, Siwei Lyu
Sophisticated generative adversarial network (GAN) models can now synthesize highly realistic human faces that are difficult to distinguish visually from real ones. GAN-synthesized faces have become a new form of online disinformation. In this work, we show that GAN-synthesized faces can be exposed through inconsistent corneal specular highlights between the two eyes. We show that such artifacts are widespread and describe a method to extract and compare the corneal specular highlights of the two eyes. Qualitative and quantitative evaluations of our method suggest its simplicity and effectiveness in distinguishing GAN-synthesized faces.
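A minimal sketch of the core comparison, assuming the two eye crops have already been extracted with a facial landmark detector; the brightness threshold and the IoU score here are simplifying assumptions, not the authors' exact pipeline:

```python
# Illustrative sketch: compare the corneal specular highlights of the two eyes.
# Assumes aligned, same-size eye crops are already available (e.g. from an
# off-the-shelf facial landmark detector).
import cv2
import numpy as np

def highlight_mask(eye_bgr, brightness_thresh=230):
    """Binary mask of the bright specular highlight inside an eye crop."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, brightness_thresh, 255, cv2.THRESH_BINARY)
    return mask > 0

def highlight_similarity(left_eye_bgr, right_eye_bgr):
    """IoU of the two highlight masks; low values indicate the kind of
    inconsistency the paper associates with GAN-synthesized faces."""
    m_left = highlight_mask(left_eye_bgr)
    m_right = highlight_mask(cv2.flip(right_eye_bgr, 1))  # mirror to align eyes
    union = np.logical_or(m_left, m_right).sum()
    if union == 0:
        return 0.0
    return np.logical_and(m_left, m_right).sum() / union
```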
Exposing GAN-generated Faces Using Inconsistent Corneal Specular Highlights
pdf: https://t.co/oG22tkhrMe
abs: https://t.co/211MYSOmia pic.twitter.com/1vkG21B9fH
— AK (@ak92501) September 28, 2020
2. Weird AI Yankovic: Generating Parody Lyrics
Mark Riedl
Lyrics parody swaps one set of words that accompany a melody with a new set of words, preserving the number of syllables per line and the rhyme scheme. Lyrics parody generation is a challenge for controllable text generation. We show how a specialized sampling procedure, combined with backward text generation with XLNet, can produce parody lyrics that reliably meet the syllable and rhyme scheme constraints. We introduce the Weird AI Yankovic system and provide a case study evaluation. We conclude with a discussion of the societal implications of neural lyric parody generation.
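A minimal sketch of constraint-checked sampling for a single lyric line; the language-model call is a hypothetical stand-in (the paper's backward generation with XLNet is not reproduced here), and the syllable counter is a rough vowel-group heuristic:

```python
# Rejection sampling against a syllable constraint: keep drawing candidate
# lines until one matches the syllable count of the original lyric line.
import re

def count_syllables(word):
    """Rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def line_syllables(line):
    return sum(count_syllables(w) for w in re.findall(r"[A-Za-z']+", line))

def sample_parody_line(generate_candidate, target_syllables, max_tries=100):
    """generate_candidate is a hypothetical language-model sampler that
    returns one candidate line per call."""
    for _ in range(max_tries):
        candidate = generate_candidate()
        if line_syllables(candidate) == target_syllables:
            return candidate
    return None  # no candidate satisfied the constraint
```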
Weird AI Yankovic: Generating Parody Lyrics
pdf: https://t.co/ol5Q37agU6
abs: https://t.co/ra8ucUbURt pic.twitter.com/QGFLMyQGrP
— AK (@ak92501) September 28, 2020
3. G-SimCLR : Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling
Souradip Chakraborty, Aritra Roy Gosthipaty, Sayak Paul
In computer vision, it is evident that deep neural networks perform better in a supervised setting with a large amount of labeled data. The representations learned with supervision are not only of high quality but also help the model achieve higher accuracy. However, collecting and annotating a large dataset is costly and time-consuming. To avoid this, much research has focused on unsupervised visual representation learning, especially in a self-supervised setting. Among recent advances in self-supervised methods for visual recognition, SimCLR (Chen et al.) shows that good-quality representations can indeed be learned without explicit supervision. In SimCLR, the authors maximize the similarity between augmentations of the same image and minimize the similarity between augmentations of different images. A linear classifier trained on the representations learned with this approach yields 76.5% top-1 accuracy on the ImageNet ILSVRC-2012 dataset. In this work, we propose that, with the normalized temperature-scaled cross-entropy (NT-Xent) loss function (as used in SimCLR), it is beneficial not to have images of the same category in the same batch. In an unsupervised setting, the category information of the images is missing. We therefore use the latent-space representations of a denoising autoencoder trained on the unlabeled dataset and cluster them with k-means to obtain pseudo labels. With this a priori information, we form batches in which no two images belong to the same category. We report comparable performance enhancements on the CIFAR10 dataset and a subset of the ImageNet dataset. We refer to our method as G-SimCLR.
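A minimal sketch of the batching idea, assuming the denoising-autoencoder latents are already computed; k-means provides pseudo labels, and each batch draws at most one image per pseudo label (batch_size is assumed to be no larger than the number of clusters):

```python
# Form batches in which no two images share a pseudo label obtained by
# clustering precomputed latent representations with k-means.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label_batches(latents, batch_size, n_clusters, seed=0):
    pseudo = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(latents)
    rng = np.random.default_rng(seed)
    # bucket image indices by pseudo label, each bucket in random order
    buckets = [rng.permutation(np.where(pseudo == c)[0]).tolist()
               for c in range(n_clusters)]
    batches = []
    while True:
        non_empty = [b for b in buckets if b]
        if len(non_empty) < batch_size:
            break
        # one index from each of batch_size different clusters
        batches.append([b.pop() for b in non_empty[:batch_size]])
    return batches
```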
The preprint of this work is now available at https://t.co/7TsnndMBlP. @ariG23498 @SOURADIPCHAKR18 https://t.co/hqynTovYMX
— Sayak Paul (@RisingSayak) September 28, 2020
4. Flexible Performant GEMM Kernels on GPU
Thomas Faingnaert, Tim Besard, Bjorn De Sutter
General Matrix Multiplication (GEMM) kernels take center stage in high-performance computing and machine learning. Recent NVIDIA GPUs include GEMM accelerators, such as NVIDIA's Tensor Cores. Their exploitation is hampered by the two-language problem: it requires either low-level programming, which implies low programmer productivity, or using libraries that only offer a limited set of components. Because rephrasing algorithms in terms of established components often introduces overhead, the libraries' lack of flexibility limits the freedom to explore new algorithms. Researchers using GEMMs therefore cannot enjoy programming productivity, high performance, and research flexibility all at once. In this paper we solve this problem. We present three sets of abstractions and interfaces to program GEMMs within the scientific Julia programming language. The interfaces and abstractions are co-designed for researchers' needs and Julia's features to achieve sufficient separation of concerns and the flexibility to easily extend basic GEMMs in many different ways without paying a performance price. Comparing our GEMMs to the state-of-the-art libraries cuBLAS and CUTLASS, we demonstrate that our performance is mostly on par with, and in some cases even exceeds, these libraries, without having to write a single line of code in CUDA C++ or assembly, and without facing flexibility limitations.
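A conceptual NumPy illustration (not the paper's Julia API) of the kind of flexibility these abstractions target: a generalized GEMM whose element-wise epilogue is a pluggable component rather than a fixed library variant:

```python
# Reference semantics of D = epilogue(alpha * A @ B + beta * C) with a
# user-supplied epilogue; the paper realizes this kind of composability on
# Tensor Cores in Julia, which is not reproduced here.
import numpy as np

def generalized_gemm(A, B, C, alpha=1.0, beta=0.0, epilogue=lambda x: x):
    return epilogue(alpha * (A @ B) + beta * C)

# Example: fuse a ReLU epilogue into the GEMM definition.
A = np.random.rand(128, 64).astype(np.float32)
B = np.random.rand(64, 32).astype(np.float32)
C = np.zeros((128, 32), dtype=np.float32)
D = generalized_gemm(A, B, C, epilogue=lambda x: np.maximum(x, 0.0))
```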
Researchers have developed support for Nvidia Tensor Cores in the Julia compiler and libraries through a WMMA API of wrapper functions around compiler intrinsics. https://t.co/kH8WmrBsUw pic.twitter.com/nLhM6lSvTG
— Underfox (@Underfox3) September 28, 2020
5. robosuite: A Modular Simulation Framework and Benchmark for Robot Learning
Yuke Zhu, Josiah Wong, Ajay Mandlekar, Roberto Martín-Martín
robosuite is a simulation framework for robot learning powered by the MuJoCo physics engine. It offers a modular design for creating robotic tasks as well as a suite of benchmark environments for reproducible research. This paper discusses the key system modules and the benchmark environments of our new release robosuite v1.0.
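A minimal usage sketch modeled on the robosuite quickstart; exact environment names and keyword arguments may differ between releases:

```python
# Create a benchmark environment, then step it with random actions.
import numpy as np
import robosuite as suite

env = suite.make(
    env_name="Lift",              # one of the benchmark tasks
    robots="Panda",               # choice of robot arm
    has_renderer=False,           # no on-screen rendering
    has_offscreen_renderer=False,
    use_camera_obs=False,         # low-dimensional observations only
)

obs = env.reset()
for _ in range(100):
    action = np.random.randn(env.robots[0].dof)  # random exploration
    obs, reward, done, info = env.step(action)
env.close()
```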
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning https://t.co/ZbHj9oYSFp pic.twitter.com/esyujukE8W
— sim2real (@sim2realAIorg) September 28, 2020