1. Explaining Deep Neural Networks
Oana-Maria Camburu
Deep neural networks are becoming more and more popular due to their revolutionary success in diverse areas, such as computer vision, natural language processing, and speech recognition. However, the decision-making processes of these models are generally not interpretable to users. In various domains, such as healthcare, finance, or law, it is critical to know the reasons behind a decision made by an artificial intelligence system. Therefore, several directions for explaining neural models have recently been explored. In this thesis, I investigate two major directions for explaining deep neural networks. The first direction consists of feature-based post-hoc explanatory methods, that is, methods that aim to explain an already trained and fixed model (post-hoc), and that provide explanations in terms of input features, such as tokens for text and superpixels for images (feature-based). The second direction consists of self-explanatory neural models that generate natural language explanations, that is, models that have a built-in module that generates explanations for the predictions of the model.
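For a concrete feel of the first direction, here is a minimal sketch of a feature-based post-hoc explanation by occlusion: each token is masked in turn and the drop in the predicted probability is taken as its importance. The `predict_proba` callable and the toy model are hypothetical placeholders, not part of the thesis.

```python
# Minimal occlusion-based, feature-based post-hoc explanation for a text
# classifier. `predict_proba` is a hypothetical stand-in for any trained,
# fixed model that maps a token list to class probabilities.

def occlusion_importance(tokens, predict_proba, target_class, mask="[UNK]"):
    """Score each token by how much masking it lowers the target-class probability."""
    base = predict_proba(tokens)[target_class]
    scores = []
    for i in range(len(tokens)):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        scores.append(base - predict_proba(masked)[target_class])
    return list(zip(tokens, scores))

def toy_model(tokens):  # toy classifier that likes the word "great"
    p = 0.9 if "great" in tokens else 0.2
    return {0: 1.0 - p, 1: p}

print(occlusion_importance(["this", "movie", "is", "great"], toy_model, target_class=1))
```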
Excited to share my PhD thesis on Explaining Deep Neural Networks: https://t.co/ZVMSImo3b6!
— Oana Camburu (@oanacamb) October 6, 2020
Very grateful to my supervisors and to all the other amazing people I have met during this journey! pic.twitter.com/Uwx6D6qxtc
2. Evaluating Progress on Machine Learning for Longitudinal Electronic Healthcare Data
David Bellamy, Leo Celi, Andrew L. Beam
The Large Scale Visual Recognition Challenge based on the well-known ImageNet dataset catalyzed an intense flurry of progress in computer vision. Benchmark tasks have propelled other sub-fields of machine learning forward at an equally impressive pace, but in healthcare it has primarily been image processing tasks, such as in dermatology and radiology, that have experienced similar benchmark-driven progress. In the present study, we performed a comprehensive review of benchmarks in medical machine learning for structured data, identifying one based on the Medical Information Mart for Intensive Care (MIMIC-III) that allows the first direct comparison of predictive performance and thus the evaluation of progress on four clinical prediction tasks: mortality, length of stay, phenotyping, and patient decompensation. We find that little meaningful progress has been made on these tasks over a 3-year period, despite significant community engagement. Through our meta-analysis, we find that deep recurrent models outperform logistic regression on only certain tasks. We conclude with a synthesis of these results, possible explanations, and a list of desirable qualities for future benchmarks in medical machine learning.
Are we making meaningful progress on machine learning for EHR data?
— Andrew Beam (@AndrewLBeam) October 6, 2020
New preprint with @DavidRBellamy and Leo Celi tries to answer this question through the lens of benchmarks and the answer is, unfortunately, "probably not"
Paper: https://t.co/KjgjvC0eWQ
Some highlights 👇 pic.twitter.com/5xZf5iqJQs
3. End-to-End Differentiable Molecular Mechanics Force Field Construction
Yuanqing Wang, Josh Fass, John D. Chodera
Molecular mechanics (MM) potentials have long been a workhorse of computational chemistry. Leveraging accuracy and speed, these functional forms find use in a wide variety of applications from rapid virtual screening to detailed free energy calculations. Traditionally, MM potentials have relied on human-curated, inflexible, and poorly extensible discrete chemical perception rules (atom types) for applying parameters to molecules or biopolymers, making them difficult to optimize to fit quantum chemical or physical property data. Here, we propose an alternative approach that uses graph nets to perceive chemical environments, producing continuous atom embeddings from which valence and nonbonded parameters can be predicted using a feed-forward neural network. Since all stages are built using smooth functions, the entire process of chemical perception and parameter assignment is differentiable end-to-end with respect to model parameters, allowing new force fields to be easily constructed, extended, and applied to arbitrary molecules. We show that this approach has the capacity to reproduce legacy atom types and can be fit to MM and QM energies and forces, among other targets.
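A minimal sketch of the idea (not the authors' implementation): message passing over the molecular graph yields continuous atom embeddings, and a small feed-forward network reads per-atom parameters off them, so gradients can flow end to end. The module name, feature sizes, and output parameters below are made up for illustration.

```python
# Sketch of graph-net chemical perception followed by a feed-forward readout
# of per-atom parameters; all operations are smooth, so gradients flow from
# the predicted parameters back through the whole model.
import torch
import torch.nn as nn

class AtomTyperFree(nn.Module):  # hypothetical name
    def __init__(self, n_feat=16, hidden=32, rounds=3):
        super().__init__()
        self.embed = nn.Linear(n_feat, hidden)
        self.message = nn.Linear(hidden, hidden)
        self.readout = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2))  # e.g. two nonbonded parameters
        self.rounds = rounds

    def forward(self, atom_feats, adjacency):
        h = torch.relu(self.embed(atom_feats))
        for _ in range(self.rounds):                 # neighbour-sum message passing
            h = torch.relu(h + adjacency @ self.message(h))
        return self.readout(h)                       # continuous per-atom parameters

# Toy molecule: 3 atoms, random features, chain connectivity.
feats = torch.randn(3, 16)
adj = torch.tensor([[0., 1, 0], [1, 0, 1], [0, 1, 0]])
params = AtomTyperFree()(feats, adj)
params.sum().backward()                              # end-to-end differentiable
```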
Biomolecular force field parameterization enters the modern age! This new preprint from @YuanqingWang and @maxentile shows how graph nets can directly assign MM force field parameters, easily learning existing MM force fields or directly fitting QM data.https://t.co/xPlNHpqee4 pic.twitter.com/Vj08tkRByM
— John Chodera (he/him) (@jchodera) October 6, 2020
4. JSSS: free Japanese speech corpus for summarization and simplification
Shinnosuke Takamichi, Mamoru Komachi, Naoko Tanji, Hiroshi Saruwatari
In this paper, we construct a new Japanese speech corpus for speech-based summarization and simplification, “JSSS” (pronounced “j-triple-s”). Given the success of reading-style speech synthesis from short-form sentences, we aim to design more difficult tasks for delivering information to humans. Our corpus contains voices recorded for two tasks that have a role in providing information under constraints: duration-constrained text-to-speech summarization and speaking-style simplification. It also contains utterances of long-form sentences as an optional task. This paper describes how we designed the corpus, which is available on our project page.
We are pleased to share our new Japanese speech corpus for summarization & simplification!
— Shinnosuke Takamichi (高道 慎之介) (@forthshinji) October 6, 2020
We have released a Japanese speech corpus for summarization and simplification!
pdf: https://t.co/NaFtAlcz18
data: https://t.co/lnHov4IED1
1 / n pic.twitter.com/XxrGZR0Yob
5. Representational aspects of depth and conditioning in normalizing flows
Frederic Koehler, Viraj Mehta, Andrej Risteski
Normalizing flows are among the most popular paradigms in generative modeling, especially for images, primarily because we can efficiently evaluate the likelihood of a data point. Normalizing flows also come with difficulties: models which produce good samples typically need to be extremely deep — which comes with accompanying vanishing/exploding gradient problems. Relatedly, they are often poorly conditioned since typical training data like images intuitively are lower-dimensional, and the learned maps often have Jacobians that are close to being singular. In our paper, we tackle representational aspects around depth and conditioning of normalizing flows — both for general invertible architectures, and for a particular common architecture — affine couplings. For general invertible architectures, we prove that invertibility comes at a cost in terms of depth: we show examples where a much deeper normalizing flow model may need to be used to match the performance of a non-invertible generator. For affine couplings, we first show that the choice of partitions isn’t a likely bottleneck for depth: we show that any invertible linear map (and hence a permutation) can be simulated by a constant number of affine coupling layers, using a fixed partition. This shows that the extra flexibility conferred by 1x1 convolution layers, as in GLOW, can in principle be simulated by increasing the size by a constant factor. Next, in terms of conditioning, we show that affine couplings are universal approximators — provided the Jacobian of the model is allowed to be close to singular. We furthermore empirically explore the benefit of different kinds of padding — a common strategy for improving conditioning — on both synthetic and real-life datasets.
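For reference, below is a minimal RealNVP-style affine coupling layer with a fixed half/half partition, the building block the representational results concern; this is an illustrative sketch, not the paper's construction.

```python
# Minimal affine coupling layer with a fixed half/half partition. The
# log-determinant is the sum of the predicted log-scales, so the likelihood
# stays cheap to evaluate, and the inverse is available in closed form.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))   # outputs (log_s, t)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        y2 = x2 * torch.exp(log_s) + t                      # transform second half only
        return torch.cat([x1, y2], dim=-1), log_s.sum(dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=-1)

x = torch.randn(4, 8)
layer = AffineCoupling(8)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5))       # invertibility check
```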
Representational aspects of depth and conditioning in normalizing flows. #DeepLearning #BigData #Analytics #RStats #Python #IIoT #Java #JavaScript #ReactJS #Serverless #IoT #Linux #100DaysOfCode #Coding #Programming #Cloud #DataScience #MachineLearning #AIhttps://t.co/WW18STTb8k pic.twitter.com/x8OdEnyYo2
— Marcus Borba (@marcusborba) October 6, 2020
6. On Losses for Modern Language Models
Stephane Aroca-Ouellette, Frank Rudzicz
BERT set many state-of-the-art results over varied NLU benchmarks by pre-training over two tasks: masked language modelling (MLM) and next sentence prediction (NSP), the latter of which has been highly criticized. In this paper, we 1) clarify NSP’s effect on BERT pre-training, 2) explore fourteen possible auxiliary pre-training tasks, of which seven are novel to modern language models, and 3) investigate different ways to include multiple tasks into pre-training. We show that NSP is detrimental to training due to its context splitting and shallow semantic signal. We also identify six auxiliary pre-training tasks — sentence ordering, adjacent sentence prediction, TF prediction, TF-IDF prediction, a FastSent variant, and a Quick Thoughts variant — that outperform a pure MLM baseline. Finally, we demonstrate that using multiple tasks in a multi-task pre-training framework provides better results than using any single auxiliary task. Using these methods, we outperform BERT Base on the GLUE benchmark using fewer than a quarter of the training tokens.
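A sketch of what multi-task pre-training can look like in code, assuming hypothetical encoder and head modules: the MLM loss is combined with one auxiliary loss (here sentence ordering) through a simple weighted sum. This is illustrative only, not the paper's training setup.

```python
# Sketch of a multi-task pre-training step: MLM loss plus one auxiliary loss,
# summed with a weighting coefficient. The encoder, heads, and batch fields
# are hypothetical placeholders.
import torch
import torch.nn.functional as F

def pretraining_step(encoder, mlm_head, order_head, batch, aux_weight=1.0):
    hidden = encoder(batch["input_ids"])                      # (B, T, H)
    # Masked language modelling: predict original token ids at masked positions.
    mlm_logits = mlm_head(hidden)                             # (B, T, V)
    mlm_loss = F.cross_entropy(mlm_logits.flatten(0, 1),
                               batch["mlm_labels"].flatten(), ignore_index=-100)
    # Auxiliary task: is the sentence pair in its original order?
    order_logits = order_head(hidden[:, 0])                   # first-token pooling
    order_loss = F.cross_entropy(order_logits, batch["order_labels"])
    return mlm_loss + aux_weight * order_loss

# Toy usage with random data and simple stand-in modules.
B, T, H, V = 2, 6, 8, 11
batch = {"input_ids": torch.randint(0, V, (B, T)),
         "mlm_labels": torch.randint(0, V, (B, T)),
         "order_labels": torch.randint(0, 2, (B,))}
loss = pretraining_step(torch.nn.Embedding(V, H), torch.nn.Linear(H, V),
                        torch.nn.Linear(H, 2), batch)
print(loss)
```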
On Losses for Modern Language Models
— Aran Komatsuzaki (@arankomatsuzaki) October 6, 2020
Outperform BERT_Base on the GLUE benchmark using fewer than a quarter of the training tokens by exploring fourteen possible auxiliary pre-training tasks.https://t.co/4zOeUEasnD pic.twitter.com/Yzpv2zYgsp
7. Supporting large-scale image recognition with out-of-domain samples
Christof Henkel, Philipp Singer
This article presents an efficient end-to-end method for instance-level recognition, applied to the task of labeling and ranking landmark images. In a first step, we embed images in a high-dimensional feature space using convolutional neural networks trained with an additive angular margin loss and classify images using visual similarity. We then efficiently re-rank predictions and filter noise by utilizing similarity to out-of-domain images. Using this approach, we achieved 1st place in the 2020 edition of the Google Landmark Recognition challenge.
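A sketch of the loss family the first step relies on: an ArcFace-style additive angular margin applied to cosine similarities between L2-normalized embeddings and class weights. This is an illustration of the technique, not the authors' code.

```python
# ArcFace-style additive angular margin loss over L2-normalised embeddings and
# class weights (illustrative sketch of the loss family, with made-up shapes).
import torch
import torch.nn.functional as F

def arcface_loss(embeddings, weights, labels, margin=0.3, scale=30.0):
    emb = F.normalize(embeddings, dim=1)                 # (B, D)
    w = F.normalize(weights, dim=1)                      # (C, D)
    cos = emb @ w.t()                                    # cosine-similarity logits
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, num_classes=w.size(0)).bool()
    cos_margin = torch.where(target, torch.cos(theta + margin), cos)
    return F.cross_entropy(scale * cos_margin, labels)

loss = arcface_loss(torch.randn(4, 128), torch.randn(10, 128),
                    torch.randint(0, 10, (4,)))
print(loss)
```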
Published a paper describing our (w/ @kagglingdieter) winning solution to GLR2020: "Supporting large-scale image recognition with out-of-domain samples" https://t.co/RBfH9kz9Lu Code will follow soon #kaggle
— Philipp Singer (@ph_singer) October 6, 2020
8. A rigorous and robust quantum speed-up in supervised machine learning
Yunchao Liu, Srinivasan Arunachalam, Kristan Temme
Over the past few years, several quantum machine learning algorithms have been proposed that promise quantum speed-ups over their classical counterparts. Most of these learning algorithms assume quantum access to data, and it is unclear whether quantum speed-ups still exist without making these strong assumptions. In this paper, we establish a rigorous quantum speed-up for supervised classification using a quantum learning algorithm that only requires classical access to data. Our quantum classifier is a conventional support vector machine that uses a fault-tolerant quantum computer to estimate a kernel function. Data samples are mapped to a quantum feature space and the kernel entries can be estimated as the transition amplitude of a quantum circuit. We construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely-believed hardness of the discrete logarithm problem. Meanwhile, the quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.
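The classical half of such a pipeline is a standard SVM over a precomputed kernel matrix. In the sketch below, an ordinary RBF kernel stands in for the kernel entries that the paper would estimate as transition amplitudes on a quantum computer.

```python
# Sketch of the classical half of the pipeline: an SVM trained on a precomputed
# kernel matrix. A classical RBF kernel is used here purely as a placeholder for
# quantum-estimated kernel entries.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(40, 3)), rng.integers(0, 2, 40)

def kernel_matrix(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)          # placeholder for quantum-estimated entries

clf = SVC(kernel="precomputed").fit(kernel_matrix(X_train, X_train), y_train)
X_test = rng.normal(size=(5, 3))
print(clf.predict(kernel_matrix(X_test, X_train)))   # test kernel vs. training points
```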
Could be real: A rigorous and robust quantum speed-up in supervised machine learning
— Spooky Quantum Correlations (@quantum_graeme) October 6, 2020
https://t.co/NyfEsSzMKu
9. D3Net: Densely connected multidilated DenseNet for music source separation
Naoya Takahashi, Yuki Mitsufuji
Music source separation requires a large input field to model the long-term dependencies of an audio signal. Previous convolutional neural network (CNN)-based approaches model the large input field by sequentially down- and up-sampling feature maps or by dilated convolution. In this paper, we argue for the importance of rapidly growing the receptive field and simultaneously modeling multi-resolution data in a single convolution layer, and propose a novel CNN architecture called densely connected multidilated DenseNet (D3Net). D3Net involves a novel multi-dilated convolution that has different dilation factors in a single layer to model different resolutions simultaneously. By combining the multi-dilated convolution with the DenseNet architecture, D3Net avoids the aliasing problem that arises when dilated convolution is naively incorporated into DenseNet. Experimental results on the MUSDB18 dataset show that D3Net achieves state-of-the-art performance with an average signal-to-distortion ratio (SDR) of 6.01 dB.
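A rough sketch of a multi-dilated convolution, in which channel groups are convolved with different dilation factors inside a single layer; this is illustrative only and not the authors' D3Net implementation.

```python
# Sketch of a multi-dilated convolution: input channels are split into groups,
# each convolved with a different dilation factor, then concatenated, so one
# layer sees several resolutions at once.
import torch
import torch.nn as nn

class MultiDilatedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4, 8)):
        super().__init__()
        assert in_ch % len(dilations) == 0 and out_ch % len(dilations) == 0
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch // len(dilations), out_ch // len(dilations),
                      kernel_size=3, padding=d, dilation=d)
            for d in dilations)

    def forward(self, x):
        chunks = x.chunk(len(self.convs), dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)

out = MultiDilatedConv2d(16, 32)(torch.randn(1, 16, 64, 64))
print(out.shape)    # torch.Size([1, 32, 64, 64])
```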
Happy to share our new work on music source separation that obtained SOTA result on MUSDB18. We improved MMDenseNet architecture by incorporating multidilated convolution. https://t.co/0ND2RDOH4O pic.twitter.com/JRzyNCVkAK
— Naoya Takahashi (@zuNaoya) October 6, 2020
We've been silent for a few years about our new model since the last SiSEC, but it does not mean that we've stopped pursuing the best performance in music separation. Here is our latest model D3Net. #creativeAIhttps://t.co/7YoMpQ6X4h https://t.co/jRunfVCb4j
— Yuki Mitsufuji (@mittu1204) October 6, 2020
10. Towards ML Engineering: A Brief History Of TensorFlow Extended (TFX)
Konstantinos Katsiapis, Abhijit Karmarkar, Ahmet Altay, Aleksandr Zaks, Neoklis Polyzotis, Anusha Ramesh, Ben Mathes, Gautam Vasudevan, Irene Giannoumis, Jarek Wilkiewicz, Jiri Simsa, Justin Hong, Mitch Trott, Noé Lutz, Pavel A. Dournov, Robert Crowe, Sarah Sirajuddin, Tris Brian Warkentin, Zhitao Li
Software Engineering, as a discipline, has matured over the past 5+ decades. The modern world heavily depends on it, so the increased maturity of Software Engineering was an eventuality. Practices like testing and reliable technologies help make Software Engineering reliable enough to build industries upon. Meanwhile, Machine Learning (ML) has also grown over the past 2+ decades. ML is used more and more for research, experimentation and production workloads. ML now commonly powers widely-used products integral to our lives. But ML Engineering, as a discipline, has not widely matured as much as its Software Engineering ancestor. Can we take what we have learned and help the nascent field of applied ML evolve into ML Engineering the way Programming evolved into Software Engineering [1]? In this article we will give a whirlwind tour of Sibyl [2] and TensorFlow Extended (TFX) [3], two successive end-to-end (E2E) ML platforms at Alphabet. We will share the lessons learned from over a decade of applied ML built on these platforms, explain both their similarities and their differences, and expand on the shifts (both mental and technical) that helped us on our journey. In addition, we will highlight some of the capabilities of TFX that help realize several aspects of ML Engineering. We argue that in order to unlock the gains ML can bring, organizations should advance the maturity of their ML teams by investing in robust ML infrastructure and promoting ML Engineering education. We also recommend that before focusing on cutting-edge ML modeling techniques, product leaders should invest more time in adopting interoperable ML platforms for their organizations. In closing, we will also share a glimpse into the future of TFX.
Towards ML Engineering - a decade of production ML experience in one article: our #TFX paper is out! "In this article we will give a whirlwind tour of Sibyl and @TensorFlow Extended (TFX), two successive end-to-end (E2E) ML platforms at Alphabet."https://t.co/K3YQof2kK8 pic.twitter.com/jRIJKBdmtI
— Jarek Wilkiewicz (@wjarek) October 6, 2020
11. Sharpness-Aware Minimization for Efficiently Improving Generalization
Pierre Foret, Ariel Kleiner, Hossein Mobahi, Behnam Neyshabur
In today’s heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by the connection between geometry of the loss landscape and generalization (including a generalization bound that we prove here), we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-{10, 100}, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels.
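A minimal sketch of the SAM update as described in the abstract: ascend to the worst case within a small neighbourhood of the weights (a perturbation of norm rho along the gradient direction), take the gradient there, then apply the base optimizer step at the original weights. This is an illustration, not the authors' implementation.

```python
# Sketch of the SAM two-step update: perturb the weights by rho * grad / ||grad||,
# take the gradient at the perturbed point, restore the weights, then apply a
# base optimizer step with that sharpness-aware gradient.
import torch

def sam_step(params, loss_fn, base_optimizer, rho=0.05):
    loss_fn().backward()                    # gradient at the current weights
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params if p.grad is not None))
    eps = []
    with torch.no_grad():
        for p in params:
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                       # climb to the neighbourhood's worst case
            eps.append(e)
    base_optimizer.zero_grad()
    loss_fn().backward()                    # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                       # restore original weights
    base_optimizer.step()                   # descend with the sharpness-aware gradient
    base_optimizer.zero_grad()

# Toy usage: minimise a 1-D quadratic with SGD wrapped in SAM.
w = torch.tensor([2.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(50):
    sam_step([w], lambda: (w ** 2).sum(), opt)
print(w)
```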
Introducing SAM: An easy-to-use algorithm derived by connecting PAC Bayesian bounds and geometry of the loss landscape. Achieves SOTA on benchmark image tasks (0.3% error on CIFAR10, 3.9% on CIFAR100) and drastically improves label noise robustness.https://t.co/aONWVTPZsT
— Pierre Foret (@Foret_p) October 6, 2020
12. My Body is a Cage: the Role of Morphology in Graph-Based Incompatible Control
Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, Shimon Whiteson
Multitask Reinforcement Learning is a promising way to obtain models with better performance, generalisation, data efficiency, and robustness. Most existing work is limited to compatible settings, where the state and action space dimensions are the same across tasks. Graph Neural Networks (GNN) are one way to address incompatible environments, because they can process graphs of arbitrary size. They also allow practitioners to inject biases encoded in the structure of the input graph. Existing work in graph-based continuous control uses the physical morphology of the agent to construct the input graph, i.e., encoding limb features as node labels and using edges to connect the nodes if their corresponding limbs are physically connected. In this work, we present a series of ablations on existing methods that show that morphological information encoded in the graph does not improve their performance. Motivated by the hypothesis that any benefits GNNs extract from the graph structure are outweighed by difficulties they create for message passing, we also propose Amorpheus, a transformer-based approach. Further results show that, while Amorpheus ignores the morphological information that GNNs encode, it nonetheless substantially outperforms GNN-based methods.
Meet Amorpheus https://t.co/8VgGw5OE5O! Surprisingly, and contrary to previous beliefs, we show that GNNs do not exploit morphological information when doing incompatible continuous control -> we can use transformers here! Work with @MaxiIgl @_rockt @WendelinBoehmer @shimon8282 pic.twitter.com/BZ2uI5aDhe
— yobibyte (@y0b1byte) October 6, 2020
13. Pareto Probing: Trading Off Accuracy for Complexity
Tiago Pimentel, Naomi Saphra, Adina Williams, Ryan Cotterell
The question of how to probe contextual word representations in a way that is principled and useful has seen significant recent attention. In our contribution to this discussion, we argue, first, for a probe metric that reflects the trade-off between probe complexity and performance: the Pareto hypervolume. To measure complexity, we present a number of parametric and non-parametric metrics. Our experiments with such metrics show that probes’ performance curves often fail to align with widely accepted rankings between language representations (with, e.g., non-contextual representations outperforming contextual ones). These results lead us to argue, second, that common simplistic probe tasks, such as POS labeling and dependency arc labeling, are inadequate to evaluate the properties encoded in contextual word representations. We propose full dependency parsing as an example probe task, and demonstrate it with the Pareto hypervolume. In support of our arguments, the results of this illustrative experiment conform more closely to accepted rankings among contextual word representations.
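A toy illustration of the Pareto hypervolume idea for probing: each probe is a (complexity, accuracy) point, the Pareto-optimal points are kept, and the area they dominate is measured. The numbers below are invented, and this is not the paper's evaluation code.

```python
# Toy 2-D Pareto hypervolume: keep the probes not dominated by any other
# (lower complexity and higher accuracy), then measure the area they dominate
# relative to the reference point (max_complexity, 0).

def pareto_front(points):
    """Points are (complexity, accuracy); keep those not dominated by any other."""
    return sorted((c, a) for c, a in points
                  if not any(c2 <= c and a2 >= a and (c2, a2) != (c, a)
                             for c2, a2 in points))

def hypervolume(points, max_complexity=1.0):
    area, prev_acc = 0.0, 0.0
    for c, a in pareto_front(points):        # sweep from simple to complex
        area += (max_complexity - c) * (a - prev_acc)
        prev_acc = a
    return area

probes = [(0.1, 0.60), (0.3, 0.75), (0.5, 0.74), (0.9, 0.80)]
print(hypervolume(probes))
```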
Should our probes be simple or accurate? @nsaphra and I reframe this trade off with Pareto fronts in our new #emnlp2020 paper with @adinamwilliams and @ryandcotterell https://t.co/bNGObN6xME
— Tiago Pimentel (@tpimentelms) October 6, 2020
Follow the thread :) pic.twitter.com/e234V2TvWV
14. MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network
Yi Wei, Zhe Gan, Wenbo Li, Siwei Lyu, Ming-Ching Chang, Lei Zhang, Jianfeng Gao, Pengchuan Zhang
We present Mask-guided Generative Adversarial Network (MagGAN) for high-resolution face attribute editing, in which semantic facial masks from a pre-trained face parser are used to guide the fine-grained image editing process. With the introduction of a mask-guided reconstruction loss, MagGAN learns to edit only the facial parts that are relevant to the desired attribute changes, while preserving the attribute-irrelevant regions (e.g., the hat or scarf for the modification ‘To Bald’). Further, a novel mask-guided conditioning strategy is introduced to incorporate the influence region of each attribute change into the generator. In addition, a multi-level patch-wise discriminator structure is proposed to scale our model for high-resolution face editing. Experiments on the CelebA benchmark show that the proposed method significantly outperforms prior state-of-the-art approaches in terms of both image quality and editing performance.
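A sketch of what a mask-guided reconstruction loss can look like: the L1 penalty between the edited and original image is applied only on pixels the face-parsing mask marks as irrelevant to the requested attribute. Shapes and the example mask are made up for illustration, and this is not MagGAN's code.

```python
# Sketch of a mask-guided reconstruction loss: penalise changes only outside
# the attribute-relevant region given by a face-parsing mask.
import torch

def mask_guided_reconstruction_loss(edited, original, relevance_mask):
    """relevance_mask is 1 on regions the attribute edit is allowed to change."""
    keep = 1.0 - relevance_mask                      # attribute-irrelevant pixels
    return (keep * (edited - original).abs()).sum() / keep.sum().clamp(min=1.0)

img = torch.rand(1, 3, 64, 64)
edited = img + 0.1 * torch.randn_like(img)
mask = torch.zeros(1, 1, 64, 64)
mask[..., :32, :] = 1.0                              # e.g. a region allowed to change
print(mask_guided_reconstruction_loss(edited, img, mask))
```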
MagGAN: High-Resolution Face Attribute Editing with Mask-Guided Generative Adversarial Network
— AK (@ak92501) October 6, 2020
pdf: https://t.co/YYYguAapDI
abs: https://t.co/Ys0dFKHUTI pic.twitter.com/3qUWjyGI6k
15. No quantum speedup over gradient descent for non-smooth convex optimization
Ankit Garg, Robin Kothari, Praneeth Netrapalli, Suhail Sherif
We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function $f$ and its (sub)gradient. Our goal is to find an $\epsilon$-approximate minimum of $f$ starting from a point that is distance at most $R$ from the true minimum. If $f$ is $G$-Lipschitz, then the classic gradient descent algorithm solves this problem with $O((GR/\epsilon)^2)$ queries. Importantly, the number of queries is independent of the dimension $n$, and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve better complexity that is still independent of the dimension $n$. In this paper we reprove the randomized lower bound of $\Omega((GR/\epsilon)^2)$ using a simpler argument than previous lower bounds. We then show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved with fewer quantum queries. We then show an improved lower bound against quantum algorithms using a different set of instances and establish our main result that in general even quantum algorithms need $\Omega((GR/\epsilon)^2)$ queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
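For context, the classical baseline is plain subgradient descent; the sketch below uses the standard step size and the $O((GR/\epsilon)^2)$ query budget discussed in the abstract on a toy non-smooth function. It is illustrative only, not the paper's code.

```python
# Sketch of the classical baseline: subgradient descent on a non-smooth convex
# function f with Lipschitz constant G, started within distance R of the optimum.
# With step size R/(G*sqrt(T)), roughly T = (G*R/eps)**2 (sub)gradient queries
# suffice for an eps-approximate minimum.
import numpy as np

def subgradient_descent(f, subgrad, x0, G, R, eps):
    T = int(np.ceil((G * R / eps) ** 2))
    eta = R / (G * np.sqrt(T))
    x, best = np.array(x0, dtype=float), np.array(x0, dtype=float)
    for _ in range(T):
        x = x - eta * subgrad(x)
        if f(x) < f(best):
            best = x.copy()
    return best

# Example: f(x) = ||x||_1 (non-smooth), with subgradient sign(x).
f = lambda x: np.abs(x).sum()
best = subgradient_descent(f, np.sign, x0=[0.5, -0.5], G=np.sqrt(2), R=1.0, eps=0.1)
print(f(best))
```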
New paper on the arXiv with Ankit Garg (@Ankit_garg06), Praneeth Netrapalli (@PNetrapalli), and Suhail Sherif (@shlshrf): "No quantum speedup over gradient descent
— Robin Kothari (@RobinKothari) October 6, 2020
for non-smooth convex optimization."https://t.co/zqz2sIWNFg pic.twitter.com/NQWLzRjyuL
16. Optimal bounds for approximate counting
Jelani Nelson, Huacheng Yu
Storing a counter incremented $N$ times would naively consume $O(\log N)$ bits of memory. In 1978 Morris described the very first streaming algorithm: the “Morris Counter” [Morris78]. His algorithm has been shown to use $O(\log\log N + \log(1/\epsilon) + \log(1/\delta))$ bits of memory to provide a $(1+\epsilon)$-approximation with probability $1-\delta$ to the counter’s value. We provide a new simple algorithm with a simple analysis showing that $O(\log\log N + \log(1/\epsilon) + \log\log(1/\delta))$ bits suffice for the same task, i.e. an exponentially improved dependence on the inverse failure probability. We then provide a more technical analysis showing that the original Morris Counter itself, after a minor but necessary tweak, actually also enjoys this same improved upper bound. Lastly, we prove a new lower bound for this task showing optimality of our upper bound. We thus completely resolve the asymptotic space complexity of approximate counting.
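For reference, here is a sketch of the classic Morris counter being analyzed (the paper's tweak and new algorithm are not reproduced here): only an exponent is stored, incremented with probability $2^{-X}$, and the count is estimated as $2^X - 1$.

```python
# The classic Morris approximate counter: store only X, roughly log2 of the
# count; increment X with probability 2**(-X); estimate the count as 2**X - 1.
import random

class MorrisCounter:
    def __init__(self):
        self.x = 0                       # needs only O(log log n) bits

    def increment(self):
        if random.random() < 2.0 ** (-self.x):
            self.x += 1

    def estimate(self):
        return 2.0 ** self.x - 1

c = MorrisCounter()
for _ in range(100_000):
    c.increment()
print(c.estimate())                      # roughly 1e5 in expectation
```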
A cool looking paper by Jelani Nelson (@minilek) and Huacheng Yu: presents a better analysis of the Morris counter, getting an exponentially better bound in terms of the error parameter.
— Gautam Kamath (@thegautamkamath) October 6, 2020
Be sure to check out Section 4 on the value of implementation :)https://t.co/Euev7DKduw pic.twitter.com/cX4MMKCuur
17. Non-trivial informational closure of a Bayesian hyperparameter
Martin Biehl, Ryota Kanai
We investigate the non-trivial informational closure (NTIC) of a Bayesian hyperparameter inferring the underlying distribution of an independent and identically distributed finite random variable. For this we embed both the Bayesian hyperparameter updating process and the random data process into a Markov chain. The original publication by Bertschinger et al. (2006) mentioned that NTIC may be able to capture an abstract notion of modeling that is agnostic to the specific internal structure of, and the existence of explicit representations within, the modeling process. The Bayesian hyperparameter is of interest since it has a well-defined interpretation as a model of the data process and, at the same time, its dynamics can be specified without reference to this interpretation. On the one hand, we show explicitly that the NTIC of the hyperparameter increases indefinitely over time. On the other hand, we attempt to establish a connection between a quantity that is a feature of the interpretation of the hyperparameter as a model, namely the information gain, and the one-step pointwise NTIC, which is a quantity that does not depend on this interpretation. We find that in general we cannot use the one-step pointwise NTIC as an indicator for information gain. We hope this exploratory work can lead to further rigorous studies of the relation between NTIC and modeling.
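As a concrete instance of the setting, here is a sketch of a Dirichlet hyperparameter tracking an i.i.d. categorical variable, with the per-step information gain taken as the KL divergence between successive posteriors. The NTIC quantities themselves are not computed, and this information-gain definition is one common choice rather than necessarily the paper's.

```python
# Sketch: a Dirichlet hyperparameter updated by i.i.d. categorical observations,
# with per-step information gain measured as KL(new posterior || old posterior).
import numpy as np
from scipy.special import gammaln, digamma

def kl_dirichlet(a, b):
    """KL divergence between Dirichlet(a) and Dirichlet(b)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return (gammaln(a.sum()) - gammaln(b.sum())
            - (gammaln(a) - gammaln(b)).sum()
            + ((a - b) * (digamma(a) - digamma(a.sum()))).sum())

rng = np.random.default_rng(0)
true_p = np.array([0.7, 0.2, 0.1])              # the hidden data distribution
alpha = np.ones(3)                              # uniform Dirichlet hyperparameter
for t in range(20):
    x = rng.choice(3, p=true_p)                 # i.i.d. data process
    new_alpha = alpha.copy()
    new_alpha[x] += 1                           # conjugate hyperparameter update
    print(t, x, round(kl_dirichlet(new_alpha, alpha), 4))
    alpha = new_alpha
```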
New paper on NTIC with Martin Biehl (@36zimmer ) to be presented at SSCI 2020.
— Ryota Kanai (@kanair) October 6, 2020
Non-trivial informational closure of a Bayesian hyperparameter.https://t.co/1NR7PvHZvr
18. Codes for Distributed Storage
Vinayak Ramkumar, Myna Vajha, S. B. Balaji, M. Nikhil Krishnan, Birenjith Sasidharan, P. Vijay Kumar
This chapter deals with the topic of designing reliable and efficient codes for the storage and retrieval of large quantities of data over storage devices that are prone to failure. For a long time, the traditional objective has been to ensure reliability against data loss while minimizing storage overhead. More recently, a third concern has surfaced, namely the need to efficiently recover from the failure of a single storage unit, corresponding to recovery from the erasure of a single code symbol. We explain here how coding theory has evolved to tackle this fresh challenge.
A comprehensive and updated survey on Coding theory for Distributed storage, a topic very close to my heart since my PhD. Beautiful combinatorial constructions and real impact in systems like Hadoop HDFS, Azure cloud and Ceph. https://t.co/5YUqn9QEp4
— Alex Dimakis (@AlexGDimakis) October 6, 2020
19. Mossad: Defeating Software Plagiarism Detection
Breanna Devore-McDonald, Emery D. Berger
Automatic software plagiarism detection tools are widely used in educational settings to ensure that submitted work was not copied. These tools have grown in use together with the rise in enrollments in computer science programs and the widespread availability of code on-line. Educators rely on the robustness of plagiarism detection tools; the working assumption is that the effort required to evade detection is as high as that required to actually do the assigned work. This paper shows this is not the case. It presents an entirely automatic program transformation approach, Mossad, that defeats popular software plagiarism detection tools. Mossad comprises a framework that couples techniques inspired by genetic programming with domain-specific knowledge to effectively undermine plagiarism detectors. Mossad is effective at defeating four plagiarism detectors, including Moss and JPlag. Mossad is both fast and effective: it can, in minutes, generate modified versions of programs that are likely to escape detection. More insidiously, because of its non-deterministic approach, Mossad can, from a single program, generate dozens of variants, which are classified as no more suspicious than legitimate assignments. A detailed study of Mossad across a corpus of real student assignments demonstrates its efficacy at evading detection. A user study shows that graduate student assistants consistently rate Mossad-generated code as just as readable as authentic student code. This work motivates the need for both research on more robust plagiarism detection tools and greater integration of naturally plagiarism-resistant methodologies like code review into computer science education.
"Mossad: Defeating Software Plagiarism Detection" (including Moss and JPlag) - https://t.co/7AAU2NDRvy, to appear OOPSLA 2020 @bdevorem @emeryberger pic.twitter.com/y1gnfeioAv
— Emery Berger (@emeryberger) October 6, 2020
20. A Survey of Unsupervised Dependency Parsing
Wenjuan Han, Yong Jiang, Hwee Tou Ng, Kewei Tu
Syntactic dependency parsing is an important task in natural language processing. Unsupervised dependency parsing aims to learn a dependency parser from sentences that have no annotation of their correct parse trees. Despite its difficulty, unsupervised parsing is an interesting research direction because of its capability of utilizing almost unlimited unannotated text data. It also serves as the basis for other research in low-resource parsing. In this paper, we survey existing approaches to unsupervised dependency parsing, identify two major classes of approaches, and discuss recent trends. We hope that our survey can provide insights for researchers and facilitate future research on this topic.
21. Episodic Memory for Learning Subjective-Timescale Models
Alexey Zakharov, Matthew Crosby, Zafeirios Fountas
In model-based learning, an agent’s model is commonly defined over transitions between consecutive states of an environment, even though planning often requires reasoning over multi-step timescales, with intermediate states either unnecessary or, worse, accumulating prediction error. In contrast, intelligent behaviour in biological organisms is characterised by the ability to plan over varying temporal scales depending on the context. Inspired by recent work on human time perception, we devise a novel approach to learning a transition dynamics model based on the sequences of episodic memories that define the agent’s subjective timescale, over which it learns world dynamics and over which future planning is performed. We implement this in the framework of active inference and demonstrate that the resulting subjective-timescale model (STM) can systematically vary the temporal extent of its predictions while preserving the same computational efficiency. Additionally, we show that STM predictions are more likely to introduce future salient events (for example, new objects coming into view), incentivising exploration of new areas of the environment. As a result, STM produces more informative action-conditioned roll-outs that assist the agent in making better decisions. We validate a significant improvement in our STM agent’s performance in the Animal-AI environment against a baseline system trained using the environment’s objective-timescale dynamics.
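A toy sketch of the subjective-timescale idea under simplifying assumptions: environment transitions are recorded as episodic memories only when the current dynamics model is surprised, and transition-model training pairs are then formed over consecutive memories rather than consecutive environment steps. The environment, model, and threshold below are hypothetical placeholders, not the paper's active-inference implementation.

```python
# Toy sketch: collect episodic memories where a crude dynamics model is surprised,
# then build the training set over consecutive *memories* (the subjective timescale)
# instead of consecutive environment states.
import numpy as np

rng = np.random.default_rng(0)

def env_step(s):                        # hypothetical 1-D environment with rare jumps
    return s + rng.normal(0, 1.0) + (5.0 if rng.random() < 0.05 else 0.0)

predict = lambda s: s                   # crude dynamics model: "nothing changes"
threshold = 2.0                         # surprise level that triggers a memory

state, memories = 0.0, [0.0]
for _ in range(500):
    next_state = env_step(state)
    if abs(next_state - predict(state)) > threshold:   # salient / surprising event
        memories.append(next_state)
    state = next_state

# Subjective-timescale training pairs: consecutive episodic memories.
pairs = list(zip(memories[:-1], memories[1:]))
print(len(pairs), pairs[:3])
```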