1. The power of quantum neural networks
Amira Abbas, David Sutter, Christa Zoufal, Aurélien Lucchi, Alessio Figalli, Stefan Woerner
Fault-tolerant quantum computers offer the promise of dramatically improving machine learning through speed-ups in computation or improved model scalability. In the near term, however, the benefits of quantum machine learning are not so clear. Understanding the expressibility and trainability of quantum models, and quantum neural networks in particular, requires further investigation. In this work, we use tools from information geometry to define a notion of expressibility for quantum and classical models. The effective dimension, which depends on the Fisher information, is used to prove a novel generalisation bound and establish a robust measure of expressibility. We show that quantum neural networks are able to achieve a significantly better effective dimension than comparable classical neural networks. To then assess the trainability of quantum models, we connect the Fisher information spectrum to barren plateaus, the problem of vanishing gradients. Importantly, certain quantum neural networks can show resilience to this phenomenon and train faster than classical models due to their favourable optimisation landscapes, captured by a more evenly spread Fisher information spectrum. Our work is the first to demonstrate that well-designed quantum neural networks offer an advantage over classical neural networks through a higher effective dimension and faster training ability, which we verify on real quantum hardware.
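The effective dimension here is a capacity measure built from the (normalised) Fisher information matrix. As a rough illustration of the idea, here is a minimal NumPy sketch that estimates a Fisher-based effective dimension for a toy classical logistic model; the constants and the trace normalisation are simplified relative to the paper's exact definition, and the model itself is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fisher_matrix(theta, n_samples=500):
    """Empirical Fisher for a toy logistic model p(y=1|x) = sigmoid(theta . x)."""
    d = theta.size
    X = rng.normal(size=(n_samples, d))
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    y = (rng.random(n_samples) < p).astype(float)
    scores = (y - p)[:, None] * X          # score of the log-likelihood
    return scores.T @ scores / n_samples

def effective_dimension(d=4, n=1000, gamma=1.0, n_theta=50):
    """Monte Carlo estimate of an effective dimension (simplified constants)."""
    c = gamma * n / (2 * np.pi * np.log(n))
    vals = []
    for _ in range(n_theta):
        theta = rng.uniform(-1, 1, size=d)
        F = fisher_matrix(theta)
        F_hat = d * F / np.trace(F)        # trace-normalised Fisher
        _, logdet = np.linalg.slogdet(np.eye(d) + c * F_hat)
        vals.append(0.5 * logdet)          # log sqrt(det(I + c * F_hat))
    log_mean = np.log(np.mean(np.exp(vals)))
    return 2 * log_mean / np.log(c)

print(f"effective dimension ~= {effective_dimension():.2f} (of 4 parameters)")
```

A flat Fisher spectrum (all eigenvalues comparable) pushes this quantity toward the full parameter count, which is the regime the paper associates with favourable optimisation landscapes.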
7 months of my life summarised in one arXiv submission! I’m so happy to share our work on the power of quantum neural networks, alongside an amazing team @quantum_sutter @AFigalli @AurelienLucchi Christa Zoufal and Stefan Woerner from @IBMResearch/@ETH_en https://t.co/Z0kMP7Jg6C pic.twitter.com/rEZx9BI3Qr
— Amira Abbas (@AmiraMorphism) November 3, 2020
2. Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D Image GANs
Xingang Pan, Bo Dai, Ziwei Liu, Chen Change Loy, Ping Luo
Natural images are projections of 3D objects on a 2D image plane. While state-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold, it is unclear whether they implicitly capture the underlying 3D object structures. And if so, how could we exploit such knowledge to recover the 3D shapes of objects in the images? To answer these questions, in this work, we present the first attempt to directly mine 3D geometric clues from an off-the-shelf 2D GAN that is trained on RGB images only. Through our investigation, we found that such a pre-trained GAN indeed contains rich 3D knowledge and thus can be used to recover 3D shape from a single 2D image in an unsupervised manner. The core of our framework is an iterative strategy that explores and exploits diverse viewpoint and lighting variations in the GAN image manifold. The framework does not require 2D keypoint or 3D annotations, or strong assumptions on object shapes (e.g. shapes are symmetric), yet it successfully recovers 3D shapes with high precision for human faces, cats, cars, and buildings. The recovered 3D shapes immediately allow high-quality image editing like relighting and object rotation. We quantitatively demonstrate the effectiveness of our approach compared to previous methods in both 3D shape reconstruction and face rotation. Our code and models will be released at https://github.com/XingangPan/GAN2Shape.
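The shape recovery in pipelines like this rests on a photo-geometric decomposition in which an image is rendered as albedo times shading, with shading computed from surface normals and a light direction. The following NumPy sketch shows only that Lambertian rendering step on a toy depth map; the shapes, light direction, and lighting constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

def normals_from_depth(depth):
    """Estimate surface normals from a depth map via finite differences."""
    dz_dy, dz_dx = np.gradient(depth)
    n = np.stack([-dz_dx, -dz_dy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def lambertian_shade(depth, albedo, light_dir, ambient=0.3, diffuse=0.7):
    """Render image = albedo * (ambient + diffuse * max(0, n . l))."""
    n = normals_from_depth(depth)
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    shading = ambient + diffuse * np.clip(n @ l, 0.0, None)
    return albedo * shading[..., None]

# Toy example: a spherical bump lit from the upper left.
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
depth = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
img = lambertian_shade(depth, albedo=np.ones((64, 64, 3)),
                       light_dir=[-0.5, -0.5, 1.0])
print(img.shape, img.min(), img.max())
```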
Excited to share our work GAN2Shape, which reconstructs 3D shapes of monocular images using 2D image GANs in an unsupervised manner. No manual annotation or external 3D model is needed! It seems 2D GANs already capture comprehensive 3D knowledge!
— Xingang Pan (@XingangP) November 3, 2020
Paper: https://t.co/eYnKITGDpl pic.twitter.com/K5IUIfsxKJ
3. Identifying Exoplanets with Deep Learning. IV. Removing Stellar Activity Signals from Radial Velocity Measurements Using Neural Networks
Zoe L. de Beurs, Andrew Vanderburg, Christopher J. Shallue, Xavier Dumusque, Andrew Collier Cameron, Lars A. Buchhave, Rosario Cosentino, Adriano Ghedina, Raphaëlle D. Haywood, Nicholas Langellier, David W. Latham, Mercedes López-Morales, Michel Mayor, Giusi Micela, Timothy W. Milbourne, Annelies Mortier, Emilio Molinari, Francesco Pepe, David F. Phillips, Matteo Pinamonti, Giampaolo Piotto, Ken Rice, Dimitar Sasselov, Alessandro Sozzetti, Stéphane Udry, Christopher A. Watson
- retweets: 450, favorites: 174 (11/04/2020 09:07:06)
- astro-ph.EP | astro-ph.IM | astro-ph.SR | cs.LG
Exoplanet detection with precise radial velocity (RV) observations is currently limited by spurious RV signals introduced by stellar activity. We show that machine learning techniques such as linear regression and neural networks can effectively remove the activity signals (due to starspots/faculae) from RV observations. Previous efforts focused on carefully filtering out activity signals in time using modeling techniques like Gaussian Process regression (e.g. Haywood et al. 2014). Instead, we systematically remove activity signals using only changes to the average shape of spectral lines, and no information about when the observations were collected. We trained our machine learning models on both simulated data (generated with the SOAP 2.0 software; Dumusque et al. 2014) and observations of the Sun from the HARPS-N Solar Telescope (Dumusque et al. 2015; Phillips et al. 2016; Collier Cameron et al. 2019). We find that these techniques can predict and remove stellar activity from both simulated data (improving RV scatter from 82 cm/s to 3 cm/s) and from more than 600 real observations taken nearly daily over three years with the HARPS-N Solar Telescope (improving the RV scatter from 1.47 m/s to 0.78 m/s, a factor of ~ 1.9 improvement). In the future, these or similar techniques could remove activity signals from observations of stars outside our solar system and eventually help detect habitable-zone Earth-mass exoplanets around Sun-like stars.
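The key design choice is that the regression sees only line-shape features, never timestamps, so anything it predicts must trace stellar activity rather than a time-dependent planetary signal. Below is a hedged sklearn sketch on synthetic stand-in data, where `shape_features` plays the role of the measured line-shape changes (all names and constants are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Synthetic stand-in: each observation has line-shape features that
# correlate with the activity-induced RV, plus a small planetary signal.
n_obs, n_features = 600, 20
shape_features = rng.normal(size=(n_obs, n_features))
true_weights = rng.normal(size=n_features)
activity_rv = shape_features @ true_weights                     # activity (m/s)
planet_rv = 0.5 * np.sin(2 * np.pi * np.arange(n_obs) / 100.0)  # 0.5 m/s planet
observed_rv = planet_rv + activity_rv + rng.normal(0, 0.3, n_obs)

# Fit the activity model on shape features only (no time information),
# then subtract its prediction to clean the RVs.
model = LinearRegression().fit(shape_features, observed_rv)
cleaned_rv = observed_rv - model.predict(shape_features)

print(f"RV scatter before: {observed_rv.std():.2f} m/s, "
      f"after: {cleaned_rv.std():.2f} m/s")
```

Because the planetary signal is uncorrelated with the shape features, subtracting the model's prediction removes mostly activity and leaves the planet's RV signature largely intact.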
Excited to announce my first first-author paper, which demonstrates that neural networks can remove stellar activity noise from solar radial velocities, and could eventually help detect habitable-zone Earth-mass exoplanets around Sun-like stars. https://t.co/fMWM6PBNmh pic.twitter.com/QicUMdZL4j
— Zoe de Beurs (@AstroZo2o) November 3, 2020
Exciting new paper from Zoe de Beurs (@AstroZo2o, undergraduate at UT Austin working with me, Chris Shallue, and the HARPS-N team) on correcting stellar activity in radial velocity observations! https://t.co/DdbmAAJyYn
— Andrew Vanderburg (@amvanderburg) November 3, 2020
4. Tinker-HP: Accelerating Molecular Dynamics Simulations of Large Complex Systems with Advanced Point Dipole Polarizable Force Fields using GPUs and Multi-GPUs systems
Olivier Adjoua, Louis Lagardère, Luc-Henri Jolly, Arnaud Durocher, Thibaut Very, Isabelle Dupays, Zhi Wang, Théo Jaffrelot Inizan, Frédéric Célerse, Pengyu Ren, Jay Ponder, Jean-Philip Piquemal
- retweets: 384, favorites: 63 (11/04/2020 09:07:06)
- physics.comp-ph | cs.DC | cs.MS | physics.chem-ph
We present the extension of the Tinker-HP package (Lagardère et al., Chem. Sci., 2018, 9, 956-972) to the use of Graphics Processing Unit (GPU) cards to accelerate molecular dynamics simulations using polarizable many-body force fields. The new high-performance module allows for an efficient use of single- and multi-GPU architectures ranging from research laboratories to modern pre-exascale supercomputer centers. After detailing an analysis of our general scalable strategy that relies on OpenACC and CUDA, we discuss the various capabilities of the package, among them the multi-precision possibilities of the code. While an efficient double-precision implementation is provided to preserve the possibility of fast reference computations, we show that lower-precision arithmetic is preferable, providing similar accuracy for molecular dynamics while exhibiting superior performance. As Tinker-HP is mainly dedicated to accelerating simulations using new-generation point dipole polarizable force fields, we focus our study on the implementation of the AMOEBA model and provide illustrative benchmarks of the code for single- and multi-card simulations on large biosystems encompassing up to millions of atoms. The new code strongly reduces time to solution and offers the best performance ever obtained using the AMOEBA polarizable force field. Perspectives toward the strong-scaling performance of our multi-node massive parallelization strategy, unsupervised adaptive sampling, and large-scale applicability of the Tinker-HP code in biophysics are discussed. The present software has been released ahead of schedule on GitHub in connection with the High Performance Computing community's COVID-19 research efforts and is free for academics (see https://github.com/TinkerTools/tinker-hp).
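The mixed-precision trade-off the authors describe can be illustrated on a toy integrator: run the same velocity-Verlet loop in float32 and float64 and compare the conserved energy. This is only a sketch of the precision question, not Tinker-HP's actual fixed/mixed-precision scheme:

```python
import numpy as np

def harmonic_md(dtype, steps=10000, dt=0.01):
    """Velocity-Verlet for a 1D harmonic oscillator in the given precision."""
    x = dtype(1.0); v = dtype(0.0); k = dtype(1.0); m = dtype(1.0)
    dt = dtype(dt)
    a = -k * x / m
    for _ in range(steps):
        x = x + v * dt + dtype(0.5) * a * dt * dt
        a_new = -k * x / m
        v = v + dtype(0.5) * (a + a_new) * dt
        a = a_new
    return float(0.5 * m * v * v + 0.5 * k * x * x)  # total energy

e64 = harmonic_md(np.float64)
e32 = harmonic_md(np.float32)
print(f"E(float64)={e64:.8f}  E(float32)={e32:.8f}  drift={abs(e64 - e32):.2e}")
```

For a well-conditioned integrator the single-precision energy drift stays tiny over many periods, which is the intuition behind trading precision for throughput.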
Our last #Preprint : Tinker-HP : Accelerating Molecular Dynamics Simulations of Large Complex Systems with Advanced Point Dipole Polarizable Force Fields using GPUs and Multi-GPUs systems. #compchem #HPC #supercomputing #GPU @TINKERtoolsMD https://t.co/JfCttVWdye pic.twitter.com/TnqyghFrNQ
— Jean-Philip Piquemal (@jppiquem) November 3, 2020
5. Automated Transcription of Non-Latin Script Periodicals: A Case Study in the Ottoman Turkish Print Archive
Suphan Kirmizialtin, David Wrisley
Our study utilizes deep learning methods for the automated transcription of late nineteenth- and early twentieth-century periodicals written in Arabic-script Ottoman Turkish (OT) using the Transkribus platform. We discuss the historical situation of OT text collections and how they were largely excluded from the late twentieth-century corpora digitization that took place in many Latin-script languages. This exclusion has two basic reasons: the technical challenges of OCR for Arabic-script languages, and the rapid abandonment of that very script in the Turkish historical context. In the specific case of OT, opening periodical collections to digital tools requires training HTR models to generate transcriptions in the Latin writing system of contemporary readers of Turkish, and not, as some may expect, in right-to-left Arabic-script text. In the paper we discuss the challenges of training such models where a one-to-one correspondence between the writing systems does not exist, and we report results based on our HTR experiments with two OT periodicals from the early twentieth century. Finally, we reflect on the potential domain bias of HTR models in historical languages exhibiting spatio-temporal variance, as well as the significance of working between writing systems for language communities that have experienced language reform and script change.
Interested in OCR/HTR and writing systems with @Transkribus. Our paper “Automated Transcription of Non-Latin Script Periodicals” is up https://t.co/k6DjlMKF9H @suphan76069481
— DJ Wrisley (@DJWrisley) November 3, 2020
6. The Journal Coverage of Web of Science, Scopus and Dimensions: A Comparative Analysis
Vivek Kumar Singh, Prashasti Singh, Mousumi Karmakar, Jacqueline Leta, Philipp Mayr
Traditionally, Web of Science and Scopus have been the two most widely used databases for bibliometric analyses. However, during the last few years some new scholarly databases, such as Dimensions, have emerged. Several previous studies have compared different databases, either through a direct comparison of article coverage or by comparing citations across the databases. This article compares the journal coverage of three databases: Web of Science, Scopus and Dimensions. The most recent master journal lists of the three databases have been used to identify the overlapping and unique journals covered in each. The results indicate that the databases have significantly different journal coverage, with Web of Science being the most selective and Dimensions the most exhaustive. About 99.11% and 96.61% of the journals indexed in Web of Science are also indexed in Scopus and Dimensions, respectively. Scopus has 96.42% of its indexed journals also covered by Dimensions. The Dimensions database has the most exhaustive coverage, with 82.22% more journals covered than Web of Science and 48.17% more than Scopus. We also analysed the research output of 20 highly productive countries for the 2010-2019 period, as indexed in the three databases, and identified database-induced variations in the research output volume, rank and global share of different countries. In addition to variations in overall coverage of research output from different countries, the three databases appear to have differential coverage of different disciplines.
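The coverage percentages reduce to set intersections over the master journal lists. A toy Python sketch with made-up journal identifiers (the real analysis uses the full master lists):

```python
# Toy journal lists standing in for the three databases' master lists.
wos = {"J1", "J2", "J3", "J4"}
scopus = {"J1", "J2", "J3", "J4", "J5", "J6"}
dimensions = {"J1", "J2", "J3", "J5", "J6", "J7", "J8"}

def pct_covered(a, b):
    """Percentage of journals in `a` that are also indexed in `b`."""
    return 100.0 * len(a & b) / len(a)

print(f"WoS journals also in Scopus:     {pct_covered(wos, scopus):.2f}%")
print(f"WoS journals also in Dimensions: {pct_covered(wos, dimensions):.2f}%")
print(f"Dimensions vs WoS size: "
      f"{100.0 * (len(dimensions) / len(wos) - 1):.2f}% more journals")
```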
Pre-print of The journal coverage of Web of Science, Scopus and Dimensions: A Comparative Analysis. An updated comparison of WoS & Scopus and the 1st study to include journal coverage of Dimensions. See: https://t.co/WHkySP9LHc@Philipp_Mayr @webofscience @Scopus @DSDimensions
— Vivek Singh (@vivekks12) November 3, 2020
7. Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors
Qi Mao, Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Siwei Ma, Ming-Hsuan Yang
Recent image-to-image (I2I) translation algorithms focus on learning the mapping from a source to a target domain. However, the continuous translation problem of synthesizing intermediate results between the two domains has not been well studied in the literature. Generating a smooth sequence of intermediate results bridges the gap between two different domains, facilitating a morphing effect across them. Existing I2I approaches are limited to either intra-domain or deterministic inter-domain continuous translation. In this work, we present an effective signed attribute vector, which enables continuous translation along diverse mapping paths across various domains. In particular, utilizing the sign operation to encode the domain information, we introduce a unified attribute space shared by all domains, thereby allowing interpolation between attribute vectors of different domains. To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors and leverage the domain information of the interpolated results along the trajectory for adversarial training. We evaluate the proposed method on a wide range of I2I translation tasks. Both qualitative and quantitative results demonstrate that the proposed framework generates higher-quality continuous translation results than state-of-the-art methods.
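The core trick is that the sign of the attribute vector carries the domain label while its magnitude carries appearance, so interpolating between a vector and its sign-flipped counterpart traces a path from one domain to the other. A toy NumPy sketch of that interpolation, with the dimensions and sign-encoding function as illustrative simplifications of the paper's learned attribute space:

```python
import numpy as np

rng = np.random.default_rng(0)

def signed_attribute(z, domain_sign):
    """Encode domain membership in the sign, appearance in |z| (toy version)."""
    return domain_sign * np.abs(z)

# Two sign-symmetrical attribute vectors across domains A (+1) and B (-1).
z = rng.normal(size=8)
attr_a = signed_attribute(z, +1.0)
attr_b = signed_attribute(z, -1.0)

# Linear trajectory between the two; the sign of the interpolated vector's
# components indicates which domain the intermediate result leans toward.
for t in np.linspace(0.0, 1.0, 5):
    v = (1 - t) * attr_a + t * attr_b
    print(f"t={t:.2f}  mean sign={np.sign(v).mean():+.2f}")
```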
Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors
— AK (@ak92501) November 3, 2020
pdf: https://t.co/uuRXkxD1hl
abs: https://t.co/6UtCniOBkg
github: https://t.co/hEZTDGIgQC
project page: https://t.co/Ok5FAqNq59 pic.twitter.com/8o3gJmA5RB
8. Deep Reactive Planning in Dynamic Environments
Kei Ota, Devesh K. Jha, Tadashi Onishi, Asako Kanezaki, Yusuke Yoshiyasu, Yoko Sasaki, Toshisada Mariyama, Daniel Nikovski
The main novelty of the proposed approach is that it allows a robot to learn an end-to-end policy which can adapt to changes in the environment during execution. While goal conditioning of policies has been studied in the RL literature, such approaches are not easily extended to cases where the robot’s goal can change during execution. This is something that humans are naturally able to do. However, it is difficult for robots to learn such reflexes (i.e., to naturally respond to dynamic environments), especially when the goal location is not explicitly provided to the robot, and instead needs to be perceived through a vision sensor. In the current work, we present a method that can achieve such behavior by combining traditional kinematic planning, deep learning, and deep reinforcement learning in a synergistic fashion to generalize to arbitrary environments. We demonstrate the proposed approach for several reaching and pick-and-place tasks in simulation, as well as on a real system of a 6-DoF industrial manipulator.
Our CoRL-accepted paper is now public!
— Kei Ohta (@ohtake_i) November 3, 2020
We solve the trajectory-optimization problem of reaching a goal pose while avoiding obstacles in a dynamically changing environment, using a combination of image translation, path generation, supervised learning, and reinforcement learning. This is joint work with Prof. @kanejaki, AIST, and MERL.
Paper: https://t.co/wSzWUT6Uoh
Video: https://t.co/nhyTONJJcG pic.twitter.com/ywhMtGR4XL
9. Reducing Confusion in Active Learning for Part-Of-Speech Tagging
Aditi Chaudhary, Antonios Anastasopoulos, Zaid Sheikh, Graham Neubig
Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost. This is now an essential tool for building low-resource syntactic analyzers such as part-of-speech (POS) taggers. Existing AL heuristics are generally designed on the principle of selecting uncertain yet representative training instances, where annotating these instances may reduce a large number of errors. However, in an empirical study across six typologically diverse languages (German, Swedish, Galician, North Sami, Persian, and Ukrainian), we found the surprising result that even in an oracle scenario where we know the true uncertainty of predictions, these current heuristics are far from optimal. Based on this analysis, we pose the problem of AL as selecting instances which maximally reduce the confusion between particular pairs of output tags. Extensive experimentation on the aforementioned languages shows that our proposed AL strategy outperforms other AL strategies by a significant margin. We also present auxiliary results demonstrating the importance of proper calibration of models, which we ensure through cross-view training, and analysis demonstrating how our proposed strategy selects examples that more closely follow the oracle data distribution.
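As a rough illustration of confusion-driven selection, the sketch below scores each unlabeled token by how plausible its two most probable tags both are, and picks the most confused instances; this toy criterion stands in for, and is simpler than, the paper's pairwise confusion-reduction objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posteriors: tag probabilities for 100 unlabeled tokens over 5 POS tags.
probs = rng.dirichlet(alpha=np.ones(5) * 0.5, size=100)

def top2_confusion(p):
    """Confusion score between the two most probable tags (toy criterion)."""
    top2 = np.sort(p)[-2:]
    return top2[0] * top2[1]  # high when both tags are plausible

scores = np.array([top2_confusion(p) for p in probs])
budget = 10
selected = np.argsort(scores)[-budget:]  # most confused instances
print("selected instance indices:", selected)
```

Note that this only works if the posteriors are trustworthy, which is why the paper stresses calibration (ensured there via cross-view training).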
Very excited to share our #TACL work https://t.co/DByO9im2J3 done with @anas_ant, Zaid Sheikh and @gneubig! We pose the problem of active learning for POS tagging as selecting instances which maximally reduce the confusion between particular pairs of output tags. pic.twitter.com/RoRCLRreuR
— Aditi Chaudhary (@AditiC123) November 3, 2020
10. Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL
Eugene Vinitsky, Nathan Lichtle, Kanaad Parvate, Alexandre Bayen
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed-autonomy setting. We consider the problem of improving the throughput of a scaled model of the San Francisco-Oakland Bay Bridge: a two-stage bottleneck where four lanes reduce to two and then to one. Although there is extensive work examining variants of bottleneck control in a centralized setting, there is less study of the challenging multi-agent setting where the large number of interacting AVs leads to significant optimization difficulties for reinforcement learning methods. We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved. We compare our results to a hand-designed feedback controller and demonstrate that ours sharply outperforms it despite extensive tuning. Additionally, we demonstrate that the RL-based controllers adopt a robust strategy that works across penetration rates, whereas the feedback controllers degrade immediately upon penetration-rate variation. We investigate the feasibility of both action and observation decentralization and demonstrate that effective strategies are possible using purely local sensing. Finally, we open-source our code at https://github.com/eugenevinitsky/decentralized_bottlenecks.
New preprint https://t.co/yY1pAJavof:
— Eugene Vinitsky (@EugeneVinitsky) November 3, 2020
Got a few level-2 autonomous vehicles with cruise control + radars but no coordinating infrastructure? We can use multi-agent RL to train effective, decentralized bottleneck optimization schemes.
(w/ N. Lichtle, @kanaad, @alexandrebayen) 1/ pic.twitter.com/5NCPjsAWO0
11. Liputan6: A Large-scale Indonesian Dataset for Text Summarization
Fajri Koto, Jey Han Lau, Timothy Baldwin
In this paper, we introduce a large-scale Indonesian summarization dataset. We harvest articles from Liputan6.com, an online news portal, and obtain 215,827 document-summary pairs. We leverage pre-trained language models to develop benchmark extractive and abstractive summarization methods over the dataset with multilingual and monolingual BERT-based models. We include a thorough error analysis by examining machine-generated summaries that have low ROUGE scores, and expose issues both with ROUGE itself and with the extractive and abstractive summarization models.
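The ROUGE critique is easy to reproduce: an adequate but heavily paraphrased summary shares few n-grams with its reference and therefore scores poorly. A small sketch using the `rouge-score` package (the Indonesian sentences are invented examples, not dataset entries):

```python
# Requires: pip install rouge-score
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=False)

reference = "banjir melanda jakarta pada hari senin"
# A paraphrased summary can score near zero on ROUGE despite being adequate,
# one of the failure modes the paper's error analysis highlights.
generated = "ibu kota dilanda banjir awal pekan ini"

for name, score in scorer.score(reference, generated).items():
    print(f"{name}: F1={score.fmeasure:.3f}")
```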
To appear at #aacl2020, w the amazing Fajri Koto and Jey Han Lau ... Liputan6: A Large-scale Indonesian Dataset for Text Summarization https://t.co/tpOK5WEZl0 -- large-scale non-EN, highly-abstractive summ dataset; benchmark results; lots of error analysis (ROUGE awful for Indo)
— Tim Baldwin (@eltimster) November 3, 2020
12. 83% ImageNet Accuracy in One Hour
Arissa Wongpanich, Hieu Pham, James Demmel, Mingxing Tan, Quoc Le, Yang You, Sameer Kumar
EfficientNets are a family of state-of-the-art image classification models based on efficiently scaled convolutional neural networks. Currently, EfficientNets can take on the order of days to train; for example, training an EfficientNet-B0 model takes 23 hours on a Cloud TPU v2-8 node. In this paper, we explore techniques to scale up the training of EfficientNets on TPU-v3 Pods with 2048 cores, motivated by speedups that can be achieved when training at such scales. We discuss optimizations required to scale training to a batch size of 65536 on 1024 TPU-v3 cores, such as selecting large batch optimizers and learning rate schedules as well as utilizing distributed evaluation and batch normalization techniques. Additionally, we present timing and performance benchmarks for EfficientNet models trained on the ImageNet dataset in order to analyze the behavior of EfficientNets at scale. With our optimizations, we are able to train EfficientNet on ImageNet to an accuracy of 83% in 1 hour and 4 minutes.
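A central ingredient in large-batch training is scaling the learning rate with the batch size and warming it up gradually. The sketch below shows a generic linear-scaling schedule with warmup and cosine decay; the constants are illustrative and not the exact values from the paper:

```python
import math

def scaled_lr_schedule(step, total_steps, base_lr=0.016, base_batch=256,
                       batch_size=65536, warmup_frac=0.05):
    """Linear-scaling rule with warmup and cosine decay, a common large-batch
    recipe (illustrative constants, not the paper's exact schedule)."""
    peak_lr = base_lr * batch_size / base_batch        # linear scaling rule
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)   # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * peak_lr * (1 + math.cos(math.pi * progress))

for s in (0, 25, 50, 500, 999):
    print(f"step {s:4d}: lr = {scaled_lr_schedule(s, 1000):.4f}")
```

In practice, very large batches also push teams toward layer-wise optimizers such as LARS and toward distributed batch normalization, both of which the paper discusses.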
83% ImageNet Accuracy in One Hour
— AK (@ak92501) November 3, 2020
pdf: https://t.co/AC6S009ztd
abs: https://t.co/mVUiM4LbYC pic.twitter.com/JzJ7aYUPER
13. Dual-decoder Transformer for Joint Automatic Speech Recognition and Multilingual Speech Translation
Hang Le, Juan Pino, Changhan Wang, Jiatao Gu, Didier Schwab, Laurent Besacier
We introduce the dual-decoder Transformer, a new model architecture that jointly performs automatic speech recognition (ASR) and multilingual speech translation (ST). Our models are based on the original Transformer architecture (Vaswani et al., 2017) but consist of two decoders, each responsible for one task (ASR or ST). Our major contribution lies in how these decoders interact with each other: one decoder can attend to different information sources from the other via a dual-attention mechanism. We propose two variants of these architectures corresponding to two different levels of dependency between the decoders, called the parallel and cross dual-decoder Transformers, respectively. Extensive experiments on the MuST-C dataset show that our models outperform the previously reported highest translation performance in the multilingual setting, and also outperform bilingual one-to-one results. Furthermore, our parallel models demonstrate no trade-off between ASR and ST compared to the vanilla multi-task architecture. Our code and pre-trained models are available at https://github.com/formiel/speech-translation.
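A schematic of the dual-attention idea: each decoder layer self-attends as usual and then additionally attends to the other decoder's hidden states. The PyTorch sketch below is a minimal parallel-variant layer that omits encoder cross-attention, causal masking, and feed-forward sublayers for brevity; the dimensions are illustrative:

```python
import torch
import torch.nn as nn

class DualAttentionLayer(nn.Module):
    """One layer of a parallel dual-decoder: self-attention followed by
    attention over the *other* decoder's hidden states (schematic)."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.dual_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h_self, h_other):
        x, _ = self.self_attn(h_self, h_self, h_self)
        h = self.norm1(h_self + x)
        x, _ = self.dual_attn(h, h_other, h_other)  # attend to the other decoder
        return self.norm2(h + x)

asr_layer, st_layer = DualAttentionLayer(), DualAttentionLayer()
h_asr = torch.randn(2, 10, 256)   # ASR decoder states (batch, time, dim)
h_st = torch.randn(2, 12, 256)    # ST decoder states
print(asr_layer(h_asr, h_st).shape, st_layer(h_st, h_asr).shape)
```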
Happy to share our recent work on a new architecture called the dual-decoder Transformer for joint speech recognition and multilingual speech translation (oral presentation @coling2020).
— Hang Le (@formiel) November 3, 2020
Paper: https://t.co/uamKw0h2Ht
Code: https://t.co/XSuDBz62dF pic.twitter.com/UHEJ3s1VGF
14. AGAIN-VC: A One-shot Voice Conversion using Activation Guidance and Adaptive Instance Normalization
Yen-Hao Chen, Da-Yi Wu, Tsung-Han Wu, Hung-yi Lee
Recently, voice conversion (VC) has been widely studied. Many VC systems use disentanglement-based learning techniques to separate the speaker information and the linguistic content information in a speech signal, and then convert the voice by changing the speaker information to that of the target speaker. To prevent the speaker information from leaking into the content embeddings, previous works either reduce the dimension of or quantize the content embedding as a strong information bottleneck. These mechanisms hurt synthesis quality to some extent. In this work, we propose AGAIN-VC, an innovative VC system using Activation Guidance and Adaptive Instance Normalization. AGAIN-VC is an auto-encoder-based model comprising a single encoder and a decoder. With a proper activation acting as an information bottleneck on the content embeddings, the trade-off between the synthesis quality and the speaker similarity of the converted speech is improved drastically. This one-shot VC system obtains the best performance in both subjective and objective evaluations.
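A minimal sketch of the two named ingredients, assuming (batch, channels, time) feature tensors: a sigmoid activation on the instance-normalised content embedding acts as the information bottleneck, and adaptive instance normalization re-applies the target speaker's channel statistics. This toy function omits the encoder and decoder networks entirely:

```python
import torch

def instance_norm_stats(x, eps=1e-5):
    """Per-channel mean/std over time for a (batch, channels, time) tensor."""
    mean = x.mean(dim=-1, keepdim=True)
    std = x.std(dim=-1, keepdim=True) + eps
    return mean, std

def again_vc_step(content, speaker):
    """Toy AGAIN-VC-style conversion: sigmoid bottleneck on the content
    embedding, then AdaIN with the target speaker's channel statistics."""
    c_mean, c_std = instance_norm_stats(content)
    normalized = torch.sigmoid((content - c_mean) / c_std)  # activation guidance
    s_mean, s_std = instance_norm_stats(speaker)
    return normalized * s_std + s_mean                      # adaptive instance norm

content = torch.randn(1, 80, 100)  # source-utterance features
speaker = torch.randn(1, 80, 120)  # target-speaker utterance features
print(again_vc_step(content, speaker).shape)
```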
AGAIN-VC: A One-shot Voice Conversion using Activation Guidance and Adaptive Instance Normalization
— AK (@ak92501) November 3, 2020
pdf: https://t.co/hgapmKuZe4
abs: https://t.co/FdbLiEMmmd
github: https://t.co/2XPu2FV5at
project page: https://t.co/4qEau4P4gM pic.twitter.com/8hzJxGl5Yb
15. Learning to Represent Action Values as a Hypergraph on the Action Vertices
Arash Tavakoli, Mehdi Fatemi, Petar Kormushev
Action-value estimation is a critical component of many reinforcement learning (RL) methods, where sample complexity relies heavily on how fast a good estimator of the action value can be learned. Viewing this problem through the lens of representation learning, good representations of both state and action can facilitate action-value estimation. While advances in deep learning have seamlessly driven progress in learning state representations, given the specificity of the notion of agency to RL, little attention has been paid to learning action representations. We conjecture that leveraging the combinatorial structure of multi-dimensional action spaces is a key ingredient for learning good representations of action. To test this, we set forth the action hypergraph networks framework, a class of functions for learning action representations with a relational inductive bias. Using this framework, we realise an agent class by combining it with deep Q-networks, which we dub hypergraph Q-networks. We show the effectiveness of our approach on a myriad of domains: illustrative prediction problems under minimal confounding effects, Atari 2600 games, and physical control benchmarks.
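As a rough illustration of the hypergraph view for a two-dimensional discrete action space, the Q-function below decomposes into per-dimension terms (single-vertex hyperedges) plus a pairwise mixing term (a two-vertex hyperedge). This PyTorch sketch is schematic and not the paper's exact parameterisation:

```python
import torch
import torch.nn as nn

class HypergraphQHead(nn.Module):
    """Toy Q-head over a 2D discrete action space: Q decomposes into
    per-dimension terms plus a pairwise mixing term, mirroring hyperedges
    over action vertices (schematic, not the paper's exact model)."""
    def __init__(self, state_dim=16, n1=3, n2=4):
        super().__init__()
        self.head1 = nn.Linear(state_dim, n1)        # hyperedge {a1}
        self.head2 = nn.Linear(state_dim, n2)        # hyperedge {a2}
        self.head12 = nn.Linear(state_dim, n1 * n2)  # hyperedge {a1, a2}
        self.n1, self.n2 = n1, n2

    def forward(self, s):
        q1 = self.head1(s).unsqueeze(-1)                 # (B, n1, 1)
        q2 = self.head2(s).unsqueeze(-2)                 # (B, 1, n2)
        q12 = self.head12(s).view(-1, self.n1, self.n2)  # (B, n1, n2)
        return q1 + q2 + q12  # Q(s, a1, a2) for every joint action

q = HypergraphQHead()(torch.randn(5, 16))
print(q.shape)  # torch.Size([5, 3, 4])
```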
I am thrilled to share Action Hypergraph Networks, a class of models for learning action representations! 🐙 🎉
— Arash Tavakoli (@arshtvk) October 29, 2020
Combine in succession with any model for learning state representations (e.g. CNN, RNN, GNN) & train without any change to the RL loss.
Paper: https://t.co/U1BVEYPetc pic.twitter.com/WDOVgNUCGM
16. Image Inpainting with Learnable Feature Imputation
Håkon Hukkelås, Frank Lindseth, Rudolf Mester
A regular convolution layer applying a filter in the same way over known and unknown areas causes visual artifacts in the inpainted image. Several studies address this issue with feature re-normalization on the output of the convolution. However, these models use a significant number of learnable parameters for feature re-normalization, or assume a binary representation of the certainty of an output. We propose (layer-wise) feature imputation of the missing input values to a convolution. In contrast to learned feature re-normalization, our method is efficient and introduces a minimal number of parameters. Furthermore, we propose a revised gradient penalty for image inpainting and a novel GAN architecture trained exclusively on an adversarial loss. Our quantitative evaluation on the FDF dataset shows that our revised gradient penalty and alternative convolution significantly improve generated image quality. We present comparisons against the current state of the art on CelebA-HQ and Places2 to validate our model.
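A minimal PyTorch sketch of the imputation idea, under the simplifying assumption that a single learned per-channel value fills the unknown positions before an ordinary convolution; the paper's layer-wise scheme is more elaborate:

```python
import torch
import torch.nn as nn

class ImputingConv2d(nn.Module):
    """Toy feature imputation: replace unknown (masked) input features with
    a learned per-channel value before a regular convolution (a simplified
    take on the paper's idea)."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.fill = nn.Parameter(torch.zeros(1, in_ch, 1, 1))  # learned imputation
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x, mask):
        # mask: 1 where the pixel is known, 0 where it must be inpainted
        x = mask * x + (1 - mask) * self.fill
        return self.conv(x)

layer = ImputingConv2d(3, 16)
img = torch.randn(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.25).float()  # ~25% of pixels missing
print(layer(img, mask).shape)  # torch.Size([1, 16, 64, 64])
```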
Image Inpainting with Learnable Feature Imputation
— AK (@ak92501) November 3, 2020
pdf: https://t.co/6d5H5pnWXE
abs: https://t.co/wy8UD4jCsI pic.twitter.com/a0kgjWk6S8