1. Entropy as a Topological Operad Derivation
Tai-Danae Bradley
We share a small connection between information theory, algebra, and topology - namely, a correspondence between Shannon entropy and derivations of the operad of topological simplices. We begin with a brief review of operads and their representations with topological simplices and the real line as the main example. We then give a general definition for a derivation of an operad in any category with values in an abelian module over the operad. The main result is that Shannon entropy defines a derivation of the operad of topological simplices, and that for every derivation of this operad there exists a point at which it is given by a constant multiple of Shannon entropy. We show this is compatible with, and relies heavily on, a well-known characterization of entropy given by Faddeev in 1956 and a recent variation given by Leinster.
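The main result leans on the fact that Shannon entropy satisfies a chain rule with the shape of a Leibniz rule, essentially the recursivity condition in Faddeev's characterization: H(p ∘ (q_1, …, q_n)) = H(p) + Σ_i p_i H(q_i). As a quick sanity check (a minimal sketch of the standard identity, not code from the paper), it can be verified numerically:

```python
import numpy as np

def H(p):
    """Shannon entropy of a probability vector, with the convention 0 * log(0) = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

# p has n outcomes; q[i] is a distribution refining outcome i.
p = np.array([0.5, 0.3, 0.2])
q = [np.array([0.4, 0.6]),
     np.array([1.0]),
     np.array([0.25, 0.25, 0.5])]

# Composite distribution p ∘ (q_1, ..., q_n): outcome (i, j) has mass p_i * q_i[j].
composite = np.concatenate([p_i * q_i for p_i, q_i in zip(p, q)])

# Chain rule: H(p ∘ q) = H(p) + sum_i p_i * H(q_i)
lhs = H(composite)
rhs = H(p) + sum(p_i * H(q_i) for p_i, q_i in zip(p, q))
print(np.isclose(lhs, rhs))  # True
```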
Today I’m happy to share a new bit of math connecting ideas from information theory, algebra, and topology - all in a new preprint on the arXiv! (https://t.co/KPmLxGp3qt) This latest blog post takes a leisurely stroll through some of the ideas: https://t.co/TJkN3dSJIC pic.twitter.com/dawOPri6cS
— Tai-Danae Bradley (@math3ma) July 21, 2021
2. WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset
Luyu Wang, Yujia Li, Ozlem Aslan, Oriol Vinyals
We present a new dataset of Wikipedia articles each paired with a knowledge graph, to facilitate the research in conditional text generation, graph generation and graph representation learning. Existing graph-text paired datasets typically contain small graphs and short text (1 or few sentences), thus limiting the capabilities of the models that can be learned on the data. Our new dataset WikiGraphs is collected by pairing each Wikipedia article from the established WikiText-103 benchmark (Merity et al., 2016) with a subgraph from the Freebase knowledge graph (Bollacker et al., 2008). This makes it easy to benchmark against other state-of-the-art text generative models that are capable of generating long paragraphs of coherent text. Both the graphs and the text data are of significantly larger scale compared to prior graph-text paired datasets. We present baseline graph neural network and transformer model results on our dataset for 3 tasks: graph -> text generation, graph -> text retrieval and text -> graph retrieval. We show that better conditioning on the graph provides gains in generation and retrieval quality but there is still large room for improvement.
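For a sense of what a paired example looks like and what the retrieval tasks ask for, here is a purely illustrative sketch; the class and function names below are hypothetical stand-ins, not the WikiGraphs data format or API:

```python
from dataclasses import dataclass

@dataclass
class GraphTextPair:
    """Illustrative container (not the WikiGraphs format): a Wikipedia article
    paired with a Freebase subgraph given as (subject, relation, object) triples."""
    title: str
    text: str
    triples: list

def graph_to_pseudo_text(pair):
    """Linearize the subgraph so a text model or retrieval scorer can consume it."""
    return " ".join(" ".join(t) for t in pair.triples)

def retrieve(query_text, candidates):
    """Toy text -> graph retrieval: rank candidate graphs by token overlap with the query."""
    q = set(query_text.lower().split())
    def score(pair):
        g = set(graph_to_pseudo_text(pair).lower().split())
        return len(q & g) / (len(q | g) or 1)
    return sorted(candidates, key=score, reverse=True)

example = GraphTextPair(
    title="Example article",
    text="An article about an example entity and the place it is located in.",
    triples=[("example entity", "located in", "some place")],
)
print(retrieve("where is the example entity located", [example])[0].title)
```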
WikiGraphs: A Wikipedia Text - Knowledge Graph Paired Dataset
— AK (@ak92501) July 21, 2021
pdf: https://t.co/2OMY5lQ9B5
abs: https://t.co/vooPDrmo6q
github: https://t.co/GLC2UOylEQ
dataset of Wikipedia articles each paired with a knowledge graph pic.twitter.com/1vdCKbCANp
3. Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto
We present DrQ-v2, a model-free reinforcement learning (RL) algorithm for visual continuous control. DrQ-v2 builds on DrQ, an off-policy actor-critic approach that uses data augmentation to learn directly from pixels. We introduce several improvements that yield state-of-the-art results on the DeepMind Control Suite. Notably, DrQ-v2 is able to solve complex humanoid locomotion tasks directly from pixel observations, previously unattained by model-free RL. DrQ-v2 is conceptually simple, easy to implement, and has a significantly smaller computational footprint than prior work, with the majority of tasks taking just 8 hours to train on a single GPU. Finally, we publicly release DrQ-v2’s implementation to provide RL practitioners with a strong and computationally efficient baseline.
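The data augmentation at the heart of DrQ and DrQ-v2 is random shifting of the pixel observations. A minimal numpy sketch of that idea (pad, then randomly crop back to the original size), assuming image batches of shape (B, C, H, W); this is an illustration, not the released implementation:

```python
import numpy as np

def random_shift(imgs, pad=4):
    """Random-shift augmentation: pad each image by `pad` pixels (edge replication)
    and take a random crop of the original size. imgs: (B, C, H, W) array.
    A simplified numpy version of the idea, not the released DrQ-v2 code."""
    b, c, h, w = imgs.shape
    padded = np.pad(imgs, ((0, 0), (0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.empty_like(imgs)
    for i in range(b):
        top = np.random.randint(0, 2 * pad + 1)
        left = np.random.randint(0, 2 * pad + 1)
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out

# In an off-policy actor-critic update, the critic sees independently shifted
# copies of the same observation batch, which regularizes the value function.
obs = np.random.randint(0, 256, size=(8, 9, 84, 84), dtype=np.uint8)
aug1, aug2 = random_shift(obs), random_shift(obs)
print(aug1.shape)  # (8, 9, 84, 84)
```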
Excited to release DrQ-v2! DrQ-v2 is more sample efficient, runs 3.5X faster than DrQ, and is the first model-free agent that solves humanoid from pixels.
— Denis Yarats (@denisyarats) July 21, 2021
co-authors: @rob_fergus, @alelazaric, @LerrelPinto
tech report: https://t.co/EKCaPHpnJ0
code: https://t.co/lsdSmGtLqo
1/N pic.twitter.com/qxqD02hrLW
4. Sequence-to-Sequence Piano Transcription with Transformers
Curtis Hawthorne, Ian Simon, Rigel Swavely, Ethan Manilow, Jesse Engel
Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets. However, these models have required extensive domain-specific design of network architectures, input/output representations, and complex decoding schemes. In this work, we show that equivalent performance can be achieved using a generic encoder-decoder Transformer with standard decoding methods. We demonstrate that the model can learn to translate spectrogram inputs directly to MIDI-like output events for several transcription tasks. This sequence-to-sequence approach simplifies transcription by jointly modeling audio features and language-like output dependencies, thus removing the need for task-specific architectures. These results point toward possibilities for creating new Music Information Retrieval models by focusing on dataset creation and labeling rather than custom model design.
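The interface is simply spectrogram frames in, MIDI-like event tokens out. Below is a hedged sketch of what serializing notes into such a token stream could look like; the event vocabulary here (time shifts plus note-on/note-off) is illustrative and may not match the paper's exact tokenization:

```python
def notes_to_events(notes, time_step=0.01, max_shift_steps=100):
    """Serialize (onset_sec, offset_sec, pitch) notes into MIDI-like event tokens:
    'shift:<k>' advances time by k * time_step, 'on:<p>' / 'off:<p>' start and stop
    a pitch. Purely illustrative; the paper's actual vocabulary may differ."""
    boundaries = []
    for onset, offset, pitch in notes:
        boundaries.append((onset, f"on:{pitch}"))
        boundaries.append((offset, f"off:{pitch}"))
    boundaries.sort(key=lambda e: e[0])

    tokens, current = [], 0.0
    for t, event in boundaries:
        steps = round((t - current) / time_step)
        while steps > 0:                      # emit time shifts, capped per token
            k = min(steps, max_shift_steps)
            tokens.append(f"shift:{k}")
            steps -= k
        current = t
        tokens.append(event)
    return tokens

# Two overlapping notes; an encoder-decoder model is trained to emit this token
# stream (as target IDs) from the corresponding spectrogram frames.
print(notes_to_events([(0.0, 0.5, 60), (0.25, 0.75, 64)]))
```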
Excited to share our @ISMIR2021 paper: Sequence-to-Sequence Piano Transcription with Transformers
— Curtis Hawthorne (@fjord41) July 21, 2021
Generic encoder-decoder Transformer + Spectrogram inputs + MIDI-like event outputs = SotA results! https://t.co/jjw5cLaC4h
With @iansimon, @rigeljs, @ethanmanilow, and @jesseengel pic.twitter.com/1mMzkMSbxZ
Sequence-to-Sequence Piano Transcription with Transformers
— AK (@ak92501) July 21, 2021
pdf: https://t.co/R2uJBjupBo
abs: https://t.co/5NHsG3DTMI
Transformer architecture trained to map spectrograms to MIDI-like output events with no pretraining can achieve sota performance on automatic piano transcription pic.twitter.com/BxcAheL4cV
5. Large-scale graph representation learning with very deep GNNs and self-supervision
Ravichandra Addanki, Peter W. Battaglia, David Budden, Andreea Deac, Jonathan Godwin, Thomas Keck, Wai Lok Sibon Li, Alvaro Sanchez-Gonzalez, Jacklynn Stott, Shantanu Thakoor, Petar Veličković
- retweets: 770, favorites: 141 (07/22/2021 09:57:52)
- links: abs | pdf
- cs.LG | cs.AI | cs.SI | stat.ML
Effectively and efficiently deploying graph neural networks (GNNs) at scale remains one of the most challenging aspects of graph representation learning. Many powerful solutions have only ever been validated on comparatively small datasets, often with counter-intuitive outcomes — a barrier which has been broken by the Open Graph Benchmark Large-Scale Challenge (OGB-LSC). We entered the OGB-LSC with two large-scale GNNs: a deep transductive node classifier powered by bootstrapping, and a very deep (up to 50-layer) inductive graph regressor regularised by denoising objectives. Our models achieved an award-level (top-3) performance on both the MAG240M and PCQM4M benchmarks. In doing so, we demonstrate evidence of scalable self-supervised graph representation learning, and utility of very deep GNNs — both very important open issues. Our code is publicly available at: https://github.com/deepmind/deepmind-research/tree/master/ogb_lsc.
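The inductive graph regressor is "regularised by denoising objectives"; one common form of such an objective is to corrupt the node features and add an auxiliary loss for reconstructing the clean ones alongside the graph-level regression loss. A numpy sketch of that loss composition with a trivial stand-in GNN (my reading of "denoising objectives", not the released code):

```python
import numpy as np

rng = np.random.default_rng(0)

def gnn(node_feats, edges):
    """Stand-in for a deep message-passing GNN: mean-aggregate neighbours and apply
    a linear map, returning per-node embeddings. Illustration only."""
    n, _ = node_feats.shape
    agg = np.zeros_like(node_feats)
    deg = np.full(n, 1e-9)
    for s, t in edges:
        agg[t] += node_feats[s]
        deg[t] += 1
    return (node_feats + agg / deg[:, None]) @ W

def denoising_regularised_loss(node_feats, edges, target, noise_scale=0.05):
    """Graph regression loss plus an auxiliary denoising term: corrupt the node
    features, then predict the graph-level target AND reconstruct the clean features.
    A sketch of the 'denoising objective' idea, not the released code."""
    noisy = node_feats + noise_scale * rng.normal(size=node_feats.shape)
    h = gnn(noisy, edges)
    graph_pred = h.mean(axis=0) @ w_out                 # graph-level readout
    main = (graph_pred - target) ** 2                   # regression loss
    denoise = np.mean((h @ W_dec - node_feats) ** 2)    # reconstruct clean inputs
    return main + 0.1 * denoise

d = 16
W, W_dec, w_out = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
x, e = rng.normal(size=(5, d)), [(0, 1), (1, 2), (2, 3), (3, 4)]
print(denoising_regularised_loss(x, e, target=1.0))
```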
We release the full technical report & code for our OGB-LSC entry, in advance of our KDD Cup presentations! 🎉 https://t.co/w4cQWr6iFd
— Petar Veličković (@PetarV_93) July 21, 2021
See thread 🧵 for our insights gathered while deploying large-scale GNNs!
with @PeterWBattaglia @davidmbudden @andreeadeac22 @SibonLi et al. pic.twitter.com/Aj3joZnIZ9
Large-scale graph representation learning with very deep GNNs and self-supervision
— AK (@ak92501) July 21, 2021
pdf: https://t.co/mziqqmzpiC
github: https://t.co/mH4XNdAvTf
achieved an award-level (top-3) performance on both the MAG240M and PCQM4M benchmarks pic.twitter.com/p7ytiTUMPk
6. SynthSeg: Domain Randomisation for Segmentation of Brain MRI Scans of any Contrast and Resolution
Benjamin Billot, Douglas N. Greve, Oula Puonti, Axel Thielscher, Koen Van Leemput, Bruce Fischl, Adrian V. Dalca, Juan Eugenio Iglesias
Despite advances in data augmentation and transfer learning, convolutional neural networks (CNNs) have difficulties generalising to unseen target domains. When applied to segmentation of brain MRI scans, CNNs are highly sensitive to changes in resolution and contrast: even within the same MR modality, decreases in performance can be observed across datasets. We introduce SynthSeg, the first segmentation CNN agnostic to brain MRI scans of any contrast and resolution. SynthSeg is trained with synthetic data sampled from a generative model inspired by Bayesian segmentation. Crucially, we adopt a domain randomisation strategy where we fully randomise the generation parameters to maximise the variability of the training data. Consequently, SynthSeg can segment preprocessed and unpreprocessed real scans of any target domain, without retraining or fine-tuning. Because SynthSeg requires only segmentations for training (no images), it can learn from label maps obtained automatically from existing datasets of different populations (e.g., with atrophy and lesions), thus achieving robustness to a wide range of morphological variability. We demonstrate SynthSeg on 5,500 scans of 6 modalities and 10 resolutions, where it exhibits unparalleled generalisation compared to supervised CNNs, test time adaptation, and Bayesian segmentation. The code and trained model are available at https://github.com/BBillot/SynthSeg.
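The generative model's key move is domain randomisation: sample a synthetic image directly from a label map with fully random per-label contrast, then degrade it to a random resolution. A simplified numpy/scipy sketch of that step (an approximation of the idea, not the released SynthSeg generator):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def synth_image_from_labels(label_map, n_labels):
    """Domain-randomised synthesis from a segmentation: draw a random mean and std
    per label (arbitrary tissue contrast), sample voxel intensities from those
    Gaussians, then blur by a random amount to mimic a random acquisition resolution.
    A simplified sketch of the idea, not the released SynthSeg generator."""
    means = rng.uniform(0, 255, size=n_labels)
    stds = rng.uniform(1, 25, size=n_labels)
    img = rng.normal(means[label_map], stds[label_map])
    img = gaussian_filter(img, sigma=rng.uniform(0.0, 3.0))
    return (img - img.min()) / (img.max() - img.min() + 1e-8)   # rescale to [0, 1]

labels = rng.integers(0, 4, size=(64, 64, 64))   # toy 3-D label map with 4 classes
x = synth_image_from_labels(labels, n_labels=4)  # one synthetic training image
print(x.shape, float(x.min()), float(x.max()))
```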
Preprint, code & model of SynthSeg are out! Obtain 1 mm segmentations of brain MRI scans of ANY contrast/resolution with a single CNN. Work with @LeemputKoen, @AdrianDalca, @BruceFischl et al. @FreeSurferMRI
— Juan Eugenio Iglesias (@JuanEugenioIgl1) July 21, 2021
Try it now!
Code: https://t.co/D8AY16R6Ae
Paper: https://t.co/dfHPKu8YWQ pic.twitter.com/T7VdghnMp1
7. Rethinking the limiting dynamics of SGD: modified loss, phase space oscillations, and anomalous diffusion
Daniel Kunin, Javier Sagastuy-Brena, Lauren Gillespie, Eshed Margalit, Hidenori Tanaka, Surya Ganguli, Daniel L. K. Yamins
- retweets: 284, favorites: 92 (07/22/2021 09:57:53)
- links: abs | pdf
- cs.LG | cond-mat.stat-mech | q-bio.NC | stat.ML
In this work we explore the limiting dynamics of deep neural networks trained with stochastic gradient descent (SGD). We find empirically that long after performance has converged, networks continue to move through parameter space by a process of anomalous diffusion in which distance travelled grows as a power law in the number of gradient updates with a nontrivial exponent. We reveal an intricate interaction between the hyperparameters of optimization, the structure in the gradient noise, and the Hessian matrix at the end of training that explains this anomalous diffusion. To build this understanding, we first derive a continuous-time model for SGD with finite learning rates and batch sizes as an underdamped Langevin equation. We study this equation in the setting of linear regression, where we can derive exact, analytic expressions for the phase space dynamics of the parameters and their instantaneous velocities from initialization to stationarity. Using the Fokker-Planck equation, we show that the key ingredient driving these dynamics is not the original training loss, but rather the combination of a modified loss, which implicitly regularizes the velocity, and probability currents, which cause oscillations in phase space. We identify qualitative and quantitative predictions of this theory in the dynamics of a ResNet-18 model trained on ImageNet. Through the lens of statistical physics, we uncover a mechanistic origin for the anomalous limiting dynamics of deep neural networks trained with SGD.
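The anomalous-diffusion observation can be reproduced in spirit by tracking how far the parameters travel from a reference point and fitting a power law on a log-log scale. A toy sketch on linear regression with constant-step SGD (nothing like the paper's ResNet-18/ImageNet setting, and measuring from initialization rather than from a converged checkpoint, but it shows the measurement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression trained with constant-step mini-batch SGD.
n, d = 256, 64
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

theta = np.zeros(d)
theta0 = theta.copy()
lr, batch = 0.01, 8
checkpoints = np.unique(np.logspace(1, 4, 30).astype(int))
dists, recorded, t = [], [], 0

for target_t in checkpoints:
    while t < target_t:
        idx = rng.integers(0, n, size=batch)
        grad = X[idx].T @ (X[idx] @ theta - y[idx]) / batch
        theta -= lr * grad
        t += 1
    dists.append(np.linalg.norm(theta - theta0))
    recorded.append(t)

# Fit distance ~ t^c on a log-log scale. c = 1/2 would be ordinary diffusion; the
# paper reports a nontrivial exponent measured long after performance has converged
# (here we measure from initialization purely for illustration).
c, _ = np.polyfit(np.log(recorded), np.log(dists), 1)
print("estimated diffusion exponent:", c)
```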
Q. Do SGD trained networks converge in parameter space?
— Daniel Kunin (@KuninDaniel) July 21, 2021
A. No, they anomalously diffuse on the level sets of a modified loss!
co-led with @jvrsgsty
& @leg2015 @eshedmargalit @Hidenori8Tanaka @SuryaGanguli @dyamins https://t.co/UVZOeeN3Nm
1/10 pic.twitter.com/jOUz0ZA2iv
8. A quantum algorithm for training wide and deep classical neural networks
Alexander Zlokapa, Hartmut Neven, Seth Lloyd
Given the success of deep learning in classical machine learning, quantum algorithms for traditional neural network architectures may provide one of the most promising settings for quantum machine learning. Considering a fully-connected feedforward neural network, we show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems. We propose a quantum algorithm to approximately train a wide and deep neural network up to O(1/n) error for a training set of size n by performing sparse matrix inversion in O(log n) time. To achieve an end-to-end exponential speedup over gradient descent, the data distribution must permit efficient state preparation and readout. We numerically demonstrate that the MNIST image dataset satisfies such conditions; moreover, the quantum algorithm matches the accuracy of the fully-connected network. Beyond the proven architecture, we provide empirical evidence for training of a convolutional neural network with pooling.
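The classical backdrop here is the wide-network (neural tangent kernel) regime, in which training reduces to solving a linear system in a kernel matrix; the quantum speedup then comes from inverting that matrix when it is sparse and well conditioned. A hedged classical sketch of the linear-system view, using a toy RBF kernel as a stand-in (the actual NTK has a different closed form, and this is not the quantum algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def ntk_like_kernel(X1, X2):
    """Stand-in kernel. The actual neural tangent kernel of a deep fully-connected
    network has its own closed form; a toy RBF kernel is used here only to show
    the structure of the linear system."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2)

# In the wide-network (lazy) regime, training reduces to a linear system:
# solve (K + lam * I) a = y, then predict with f(x) = k(x, X) @ a.
X = rng.normal(size=(100, 5))
y = np.sin(X[:, 0])
K = ntk_like_kernel(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

X_test = rng.normal(size=(10, 5))
pred = ntk_like_kernel(X_test, X) @ alpha
# The claimed quantum speedup corresponds to solving this linear system in
# O(log n) time when the kernel matrix is suitably sparse and well conditioned.
print(pred.shape)
```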
A quantum algorithm for training wide and deep classical neural networks
— AK (@ak92501) July 21, 2021
pdf: https://t.co/3MzkmNAvQT
quantum algorithm to approximately train a wide and deep neural network up to O(1/n) error for a training set of size n by performing sparse matrix inversion in O(log n) time pic.twitter.com/nGsAacpwYz
9. Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning
Kaylee Burns, Christopher D. Manning, Li Fei-Fei
Although virtual agents are increasingly situated in environments where natural language is the most effective mode of interaction with humans, these exchanges are rarely used as an opportunity for learning. Leveraging language interactions effectively requires addressing limitations in the two most common approaches to language grounding: semantic parsers built on top of fixed object categories are precise but inflexible and end-to-end models are maximally expressive, but fickle and opaque. Our goal is to develop a system that balances the strengths of each approach so that users can teach agents new instructions that generalize broadly from a single example. We introduce the idea of neural abstructions: a set of constraints on the inference procedure of a label-conditioned generative model that can affect the meaning of the label in context. Starting from a core programming language that operates over abstructions, users can define increasingly complex mappings from natural language to actions. We show that with this method a user population is able to build a semantic parser for an open-ended house modification task in Minecraft. The semantic parser that results is both flexible and expressive: the percentage of utterances sourced from redefinitions increases steadily over the course of 191 total exchanges, achieving a final value of 28%.
Neural Abstructions: Abstractions that Support Construction for Grounded Language Learning
— AK (@ak92501) July 21, 2021
pdf: https://t.co/vVOWPSRysC
a set of constraints on the inference procedure of a label-conditioned generative model that can affect the meaning of the label in context pic.twitter.com/2B9S9D8gpA
10. Readability Research: An Interdisciplinary Approach
Sofie Beier, Sam Berlow, Esat Boucaud, Zoya Bylinskii, Tianyuan Cai, Jenae Cohn, Kathy Crowley, Stephanie L. Day, Tilman Dingler, Jonathan Dobres, Jennifer Healey, Rajiv Jain, Marjorie Jordan, Bernard Kerr, Qisheng Li, Dave B. Miller, Susanne Nobles, Alexandra Papoutsaki, Jing Qian, Tina Rezvanian, Shelley Rodrigo, Ben D. Sawyer, Shannon M. Sheppard, Bram Stein, Rick Treitman, Jen Vanek, Shaun Wallace, Benjamin Wolfe
Readability is on the cusp of a revolution. Fixed text is becoming fluid as a proliferation of digital reading devices rewrite what a document can do. As past constraints make way for more flexible opportunities, there is great need to understand how reading formats can be tuned to the situation and the individual. We aim to provide a firm foundation for readability research, a comprehensive framework for modern, multi-disciplinary readability research. Readability refers to aspects of visual information design which impact information flow from the page to the reader. Readability can be enhanced by changes to the set of typographical characteristics of a text. These aspects can be modified on-demand, instantly improving the ease with which a reader can process and derive meaning from text. We call on a multi-disciplinary research community to take up these challenges to elevate reading outcomes and provide the tools to do so effectively.
So excited to share our preprint on "Readability Research: An Interdisciplinary Approach" - a collaborative piece written by 28 different voices with the drive to make #readability better for all: https://t.co/gjzVfBNRPa #reading #science #tech #typography pic.twitter.com/M2PeDccpqw
— Zoya Bylinskii (@zoyathinks) July 21, 2021
11. Learn2Hop: Learned Optimization on Rough Landscapes
Amil Merchant, Luke Metz, Sam Schoenholz, Ekin Dogus Cubuk
Optimization of non-convex loss surfaces containing many local minima remains a critical problem in a variety of domains, including operations research, informatics, and material design. Yet, current techniques either require extremely high iteration counts or a large number of random restarts for good performance. In this work, we propose adapting recent developments in meta-learning to these many-minima problems by learning the optimization algorithm for various loss landscapes. We focus on problems from atomic structural optimization—finding low energy configurations of many-atom systems—including widely studied models such as bimetallic clusters and disordered silicon. We find that our optimizer learns a ‘hopping’ behavior which enables efficient exploration and improves the rate of low energy minima discovery. Finally, our learned optimizers show promising generalization with efficiency gains on never before seen tasks (e.g. new elements or compositions). Code will be made available shortly.
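The "hopping" behaviour the learned optimizer discovers is reminiscent of classical basin hopping: perturb, relax locally, accept or reject. As a reference point only (this is the classical baseline behaviour, not the learned optimizer), here is a minimal basin-hopping loop on a toy rough 1-D landscape:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Toy rough 1-D landscape with many local minima."""
    return 0.05 * x ** 2 + np.sin(3 * x) + 0.5 * np.sin(7 * x)

def local_relax(x, lr=0.01, steps=200, eps=1e-4):
    """Local minimisation via gradient descent with a numerical gradient."""
    for _ in range(steps):
        g = (energy(x + eps) - energy(x - eps)) / (2 * eps)
        x -= lr * g
    return x

def basin_hop(x0, hops=50, hop_size=1.0, temperature=0.2):
    """Classical basin hopping: random perturbation ('hop'), local relaxation,
    Metropolis accept/reject on the relaxed energies."""
    x = best = local_relax(x0)
    for _ in range(hops):
        trial = local_relax(x + hop_size * rng.normal())
        accept = energy(trial) < energy(x) or \
                 rng.random() < np.exp((energy(x) - energy(trial)) / temperature)
        if accept:
            x = trial
        if energy(x) < energy(best):
            best = x
    return best

x_star = basin_hop(5.0)
print(x_star, energy(x_star))
```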
New paper on ML & physics at ICML!
— Ekin Dogus Cubuk (@ekindogus) July 21, 2021
Learn2Hop: Learned Optimization on Rough Landscapes
With Applications to Atomic Structural Optimization
We adapt learned optimizers for atomic structural optimization, and compare to baselines from physics.
abs: https://t.co/1k07toeNK0 pic.twitter.com/teeBkmezZW
12. Generative Video Transformer: Can Objects be the Words?
Yi-Fu Wu, Jaesik Yoon, Sungjin Ahn
Transformers have been successful for many natural language processing tasks. However, applying transformers to the video domain for tasks such as long-term video generation and scene understanding has remained elusive due to the high computational complexity and the lack of natural tokenization. In this paper, we propose the Object-Centric Video Transformer (OCVT) which utilizes an object-centric approach for decomposing scenes into tokens suitable for use in a generative video transformer. By factoring the video into objects, our fully unsupervised model is able to learn complex spatio-temporal dynamics of multiple interacting objects in a scene and generate future frames of the video. Our model is also significantly more memory-efficient than pixel-based models and thus able to train on videos of length up to 70 frames with a single 48GB GPU. We compare our model with previous RNN-based approaches as well as other possible video transformer baselines. We demonstrate OCVT performs well when compared to baselines in generating future frames. OCVT also develops useful representations for video reasoning, achieving state-of-the-art performance on the CATER task.
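The tokenization is the main idea: each frame is decomposed into a small number of object slots, and the per-frame slots, flattened across time, become the token sequence the generative transformer models. A shape-level sketch of that layout (the slot extractor below is a dummy; OCVT's actual object-centric encoder and token ordering may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def frames_to_object_tokens(frames, num_slots=4, slot_dim=32):
    """Dummy stand-in for an object-centric encoder: maps each frame to `num_slots`
    object latents. Here it returns random latents, purely to show the token layout."""
    T = len(frames)
    return rng.normal(size=(T, num_slots, slot_dim))

frames = rng.random(size=(16, 64, 64, 3))   # a 16-frame toy video
slots = frames_to_object_tokens(frames)     # (T, K, D) object latents

# Flatten to a (T*K, D) token sequence; an autoregressive transformer then models
# this sequence and predicts the object tokens of future frames, which a decoder
# renders back to pixels.
tokens = slots.reshape(-1, slots.shape[-1])
print(tokens.shape)  # (64, 32): 16 frames x 4 object tokens each
```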
Generative Video Transformer: Can Objects be the Words?
— AK (@ak92501) July 21, 2021
pdf: https://t.co/6bCaZhfHzp
abs: https://t.co/COI5liESYz
a generative video transformer that leverages the recent advances in unsupervised object-centric representation learning pic.twitter.com/XZV0aYstag
13. Open Problem: Is There an Online Learning Algorithm That Learns Whenever Online Learning Is Possible?
Steve Hanneke
- retweets: 74, favorites: 66 (07/22/2021 09:57:54)
- links: abs | pdf
- cs.LG | cs.AI | math.PR | math.ST | stat.ML
This open problem asks whether there exists an online learning algorithm for binary classification that guarantees, for all target concepts, to make a sublinear number of mistakes, under only the assumption that the (possibly random) sequence of points X allows that such a learning algorithm can exist for that sequence. As a secondary problem, it also asks whether a specific concise condition completely determines whether a given (possibly random) sequence of points X admits the existence of online learning algorithms guaranteeing a sublinear number of mistakes for all target concepts.
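A hedged formalization of the question, paraphrasing the abstract rather than the paper's notation:

```latex
% For a concept class C and a (possibly random) sequence X = (x_1, x_2, ...), say X is
% "online learnable" if SOME algorithm guarantees sublinear mistakes on X for every
% target concept in C (almost surely, when X is random). The open problem asks whether
% a single algorithm A achieves this simultaneously for all such sequences:
\exists\, A \ \ \forall X \text{ online learnable} \ \ \forall f \in \mathcal{C} :
\qquad \sum_{t=1}^{n} \mathbf{1}\!\left[\hat{y}^{A}_{t} \neq f(x_t)\right] \;=\; o(n)
\quad \text{as } n \to \infty .
```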
Steve Hanneke putting his money where his mouth is. https://t.co/0x7UUj0LSa pic.twitter.com/dNvvODlEJG
— Gautam Kamath (@thegautamkamath) July 21, 2021
14. QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries
Jie Lei, Tamara L. Berg, Mohit Bansal
Detecting customized moments and highlights from videos given natural language (NL) user queries is an important but under-studied topic. One of the challenges in pursuing this direction is the lack of annotated data. To address this issue, we present the Query-based Video Highlights (QVHighlights) dataset. It consists of over 10,000 YouTube videos, covering a wide range of topics, from everyday activities and travel in lifestyle vlog videos to social and political activities in news videos. Each video in the dataset is annotated with: (1) a human-written free-form NL query, (2) relevant moments in the video w.r.t. the query, and (3) five-point scale saliency scores for all query-relevant clips. This comprehensive annotation enables us to develop and evaluate systems that detect relevant moments as well as salient highlights for diverse, flexible user queries. We also present a strong baseline for this task, Moment-DETR, a transformer encoder-decoder model that views moment retrieval as a direct set prediction problem, taking extracted video and query representations as inputs and predicting moment coordinates and saliency scores end-to-end. While our model does not utilize any human prior, we show that it performs competitively when compared to well-engineered architectures. With weakly supervised pretraining using ASR captions, Moment-DETR substantially outperforms previous methods. Lastly, we present several ablations and visualizations of Moment-DETR. Data and code are publicly available at https://github.com/jayleicn/moment_detr.
Presenting "QVHighlights": 10K YouTube videos dataset annotated w. human written queries, clip-wise relevance, highlightness/saliency scores & Moment-DETR model for joint moment localization + highlight/saliency predictionhttps://t.co/FnEF93FuCc
— Jie Lei (@jayleicn) July 21, 2021
tlberg @mohitban47 @uncnlp
1/n pic.twitter.com/vMqHPCkdPe
15. Analysis of Spatiotemporal Anomalies Using Persistent Homology: Case Studies with COVID-19 Data
Abigail Hickok, Deanna Needell, Mason A. Porter
- retweets: 90, favorites: 36 (07/22/2021 09:57:54)
- links: abs | pdf
- cs.CG | math.AT | physics.soc-ph | q-bio.PE
We develop a method for analyzing spatiotemporal anomalies in geospatial data using topological data analysis (TDA). To do this, we use persistent homology (PH), a tool from TDA that allows one to algorithmically detect geometric voids in a data set and quantify the persistence of these voids. We construct an efficient filtered simplicial complex (FSC) such that the voids in our FSC are in one-to-one correspondence with the anomalies. Our approach goes beyond simply identifying anomalies; it also encodes information about the relationships between anomalies. We use vineyards, which one can interpret as time-varying persistence diagrams (an approach for visualizing PH), to track how the locations of the anomalies change over time. We conduct two case studies using spatially heterogeneous COVID-19 data. First, we examine vaccination rates in New York City by zip code. Second, we study a year-long data set of COVID-19 case rates in neighborhoods in the city of Los Angeles.
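As a generic illustration of the machinery (ordinary Vietoris-Rips persistent homology on a point cloud, not the paper's custom filtered simplicial complex for geospatial anomalies), the ripser package computes persistence diagrams whose long-lived degree-1 features correspond to voids:

```python
import numpy as np
from ripser import ripser  # pip install ripser (scikit-tda)

rng = np.random.default_rng(0)

# A point cloud with a hole: noisy samples from a circle. The hole appears as a
# long-lived feature in the degree-1 persistence diagram.
theta = rng.uniform(0, 2 * np.pi, size=200)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))

dgms = ripser(points)["dgms"]          # Vietoris-Rips persistence, degrees 0 and 1
births_deaths = dgms[1]                # degree-1 diagram: one row per 1-cycle
lifetimes = births_deaths[:, 1] - births_deaths[:, 0]
print("most persistent 1-cycle lifetime:", lifetimes.max())
```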
We have an exciting new methods paper on arXiv: https://t.co/ZtFNoPSq2R
— Mason Porter (@masonporter) July 21, 2021
Title: Analysis of Spatiotemporal Anomalies Using Persistent Homology: Case Studies with COVID-19 Data
Authors: Abigail Hickok, Deanna Needell, Mason A. Porter pic.twitter.com/gtomMIonZE
16. Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion
Suzhen Wang, Lincheng Li, Yu Ding, Changjie Fan, Xin Yu
We propose an audio-driven talking-head method to generate photo-realistic talking-head videos from a single reference image. In this work, we tackle two key challenges: (i) producing natural head motions that match speech prosody, and (ii) maintaining the appearance of a speaker in a large head motion while stabilizing the non-face regions. We first design a head pose predictor by modeling rigid 6D head movements with a motion-aware recurrent neural network (RNN). In this way, the predicted head poses act as the low-frequency holistic movements of a talking head, thus allowing our latter network to focus on detailed facial movement generation. To depict the entire image motions arising from audio, we exploit a keypoint based dense motion field representation. Then, we develop a motion field generator to produce the dense motion fields from input audio, head poses, and a reference image. As this keypoint based representation models the motions of facial regions, head, and backgrounds integrally, our method can better constrain the spatial and temporal consistency of the generated videos. Finally, an image generation network is employed to render photo-realistic talking-head videos from the estimated keypoint based motion fields and the input reference image. Extensive experiments demonstrate that our method produces videos with plausible head motions, synchronized facial expressions, and stable backgrounds and outperforms the state-of-the-art.
Audio2Head: Audio-driven One-shot Talking-head Generation with Natural Head Motion
— AK (@ak92501) July 21, 2021
pdf: https://t.co/Tld08wW4HW
abs: https://t.co/4Ul9BllsgC pic.twitter.com/48ATH2LMSP
17. ReSSL: Relational Self-Supervised Learning with Weak Augmentation
Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Changshui Zhang, Xiaogang Wang, Chang Xu
Self-supervised Learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods mainly focus on instance-level information (i.e., the different augmented images of the same instance should have the same feature or cluster into the same class), but there is a lack of attention on the relationships between different instances. In this paper, we introduce a novel SSL paradigm, which we term the relational self-supervised learning (ReSSL) framework, that learns representations by modeling the relationship between different instances. Specifically, our proposed method employs a sharpened distribution of pairwise similarities among different instances as the relation metric, which is thus utilized to match the feature embeddings of different augmentations. Moreover, to boost the performance, we argue that weak augmentations matter for representing a more reliable relation, and we leverage a momentum strategy for practical efficiency. Experimental results show that our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency. Code is available at https://github.com/KyleZheng1997/ReSSL.
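Reading the abstract, the relation metric is a temperature-sharpened distribution of similarities between an embedding and a set of other instances, and the loss pushes the strong-augmentation relation to match the (sharper) weak-augmentation relation. A numpy sketch of that objective under those assumptions (temperatures, the queue, and the momentum encoder are all simplified here):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def ressl_loss(z_strong, z_weak, queue, tau_student=0.1, tau_teacher=0.04):
    """Relational SSL objective (sketch): build a distribution of similarities between
    each embedding and a queue of other instances; the sharper (lower-temperature)
    weak-augmentation distribution is the target, and the strong-augmentation
    distribution is trained to match it via cross-entropy.
    z_strong, z_weak: (B, D) L2-normalised embeddings; queue: (K, D), normalised."""
    p_student = softmax(z_strong @ queue.T / tau_student)
    p_teacher = softmax(z_weak @ queue.T / tau_teacher)   # target; no gradient in practice
    return -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=1))

rng = np.random.default_rng(0)
l2 = lambda a: a / np.linalg.norm(a, axis=1, keepdims=True)
z_s, z_w, q = (l2(rng.normal(size=s)) for s in [(8, 64), (8, 64), (256, 64)])
print(ressl_loss(z_s, z_w, q))
```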
ReSSL: Relational Self-Supervised Learning with
— phalanx (@ZFPhalanx) July 21, 2021
Weak Augmentation https://t.co/UapBfA4KCc
A proposal for relational SSL that learns representations by modeling the relationships between instances. Interesting. pic.twitter.com/l6twFvhoCI