1. Open is not forever: a study of vanished open access journals
Mikael Laakso, Lisa Matthias, Najko Jahn
The preservation of the scholarly record has been a point of concern since the beginning of knowledge production. With print publications, the responsibility rested primarily with librarians, but the shift towards digital publishing and, in particular, the introduction of open access (OA) have caused ambiguity and complexity. Consequently, the long-term accessibility of journals is not always guaranteed, and they can even disappear from the web completely. The purpose of this exploratory study is to systematically examine the phenomenon of vanished journals, something that has not been done before. For the analysis, we consulted several major bibliographic indexes, such as Scopus, Ulrichsweb, and the Directory of Open Access Journals, and traced the journals through the Internet Archive’s Wayback Machine. We found 192 OA journals that vanished from the web between 2000 and 2019, spanning all major research disciplines and geographic regions of the world. Our results raise vital concerns for the integrity of the scholarly record and highlight the urgency of collaborative action to ensure continued access and prevent the loss of more scholarly knowledge. We encourage those interested in the phenomenon of vanished journals to use the public dataset for their own research.
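The authors' actual pipeline is documented in their public dataset; as a minimal sketch of the tracing step, the snippet below queries the Wayback Machine's public availability API for the most recent snapshot of a journal homepage. The helper function and example domain are hypothetical, not taken from the paper.

```python
# Minimal sketch: check whether a URL is preserved in the Wayback Machine,
# using the public availability API (https://archive.org/wayback/available).
from typing import Optional
import requests

def latest_snapshot(url: str) -> Optional[str]:
    """Return the URL of the most recent Wayback Machine snapshot, if any."""
    resp = requests.get("https://archive.org/wayback/available", params={"url": url})
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Hypothetical journal domain, purely for illustration.
print(latest_snapshot("vanished-journal.example.org"))
```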
So, @mikaellaakso, @najkoja, and I have a new preprint out, where we look at vanished journals. We found 192 journals that were once openly available but have since vanished, and that this affects all disciplines and geographical regions. However.... https://t.co/nsHoOLnvST pic.twitter.com/DhXKFL2xnt
— Lisa Matthias (@l_matthia) August 28, 2020
"We found 192 #openaccess journals that vanished from the web between 2000 and 2019, spanning all major research disciplines and geographic regions of the world."https://t.co/p0eiZW32Vz
— Peter Suber (@petersuber) August 28, 2020
Comment: OA needs preservation, just as preservation needs OA.
“We found 192 OA journals that vanished from the web between 2000 and 2019, spanning all major research disciplines and geographic regions of the world.” https://t.co/B4tuRH5Gke pic.twitter.com/euR2ZIUa7P
— Retraction Watch (@RetractionWatch) August 30, 2020
Be sure to look at this important new scholarship abt hundreds (if not thousands) of #OpenAccess Journals disappearing. @internetarchive has been ramping up our efforts to help fill these gaps. More on this soon. https://t.co/y9Rjmiw3wX https://t.co/v5RsIFprJA
— Internet Archive (@internetarchive) August 30, 2020
2. CenterHMR: a Bottom-up Single-shot Method for Multi-person 3D Mesh Recovery from a Single Image
Yu Sun, Qian Bao, Wu Liu, Yili Fu, Tao Mei
In this paper, we propose a method to recover multi-person 3D mesh from a single image. Existing methods follow a multi-stage detection-based pipeline, where the 3D mesh of each person is regressed from the cropped image patch. They suffer from the high complexity of the multi-stage process and the ambiguity of image-level features; for example, it is hard for them to estimate multi-person 3D meshes in crowded cases where people cannot be separated. Instead, in this paper, we present a novel bottom-up single-shot method, the Center-based Human Mesh Recovery network (CenterHMR). The model is trained to simultaneously predict two maps, which represent the location of each human body center and the corresponding parameter vector of the 3D human mesh at each center. This explicit center-based representation guarantees pixel-level feature encoding. Moreover, the 3D mesh of each person is estimated from features centered at the visible body parts, which improves robustness under occlusion. CenterHMR surpasses previous methods on the multi-person in-the-wild benchmark 3DPW and the occlusion dataset 3DOH50K, and achieved 2nd place in the ECCV 2020 3DPW Challenge. The code is released at https://github.com/Arthur151/CenterHMR.
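As a rough illustration of the center-based representation (not the released CenterHMR code), the toy PyTorch module below predicts a body-center heatmap and a per-pixel mesh-parameter map from a shared backbone, then reads out the parameter vector stored at each detected center. The backbone, channel sizes, and the 85-dimensional parameter vector (a common SMPL pose+shape+camera size) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ToyCenterHMR(nn.Module):
    def __init__(self, n_params=85):  # assumed SMPL-style pose+shape+camera vector
        super().__init__()
        self.backbone = nn.Sequential(  # stand-in for the paper's backbone
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.center_head = nn.Conv2d(64, 1, 1)        # body-center heatmap
        self.param_head = nn.Conv2d(64, n_params, 1)  # mesh parameters at every pixel

    def forward(self, img, k=2):
        feat = self.backbone(img)
        heatmap = torch.sigmoid(self.center_head(feat))  # (B, 1, H, W)
        params = self.param_head(feat)                   # (B, P, H, W)
        B = heatmap.shape[0]
        # treat the top-k heatmap responses as detected person centers
        scores, idx = heatmap.view(B, -1).topk(k, dim=1)
        flat = params.view(B, params.shape[1], -1)
        # gather the mesh-parameter vector stored at each detected center
        per_person = torch.stack([flat[b, :, idx[b]] for b in range(B)])  # (B, P, k)
        return scores, per_person

scores, meshes = ToyCenterHMR()(torch.randn(1, 3, 256, 256))
```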
CenterHMR: a Bottom-up Single-shot Method for Multi-person 3D Mesh Recovery from a Single Image
— AK (@ak92501) August 28, 2020
pdf: https://t.co/0MGvbOXhAb
abs: https://t.co/OmG6Ad0smy
github: https://t.co/1mW2mh9Y5V pic.twitter.com/xqHyL3xQke
3. AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization
Xinsong Zhang, Hang Li
Pre-trained language models such as BERT have exhibited remarkable performance in many natural language understanding (NLU) tasks. The tokens in these models are usually fine-grained: for languages like English they are words or sub-words, and for languages like Chinese they are characters. In English, for example, there are multi-word expressions that form natural lexical units, so the use of coarse-grained tokenization also appears reasonable. In fact, both fine-grained and coarse-grained tokenizations have advantages and disadvantages for learning pre-trained language models. In this paper, we propose a novel pre-trained language model, referred to as AMBERT (A Multi-grained BERT), built on both fine-grained and coarse-grained tokenizations. For English, AMBERT takes both the sequence of words (fine-grained tokens) and the sequence of phrases (coarse-grained tokens) as input after tokenization, employs one encoder for processing the word sequence and another for the phrase sequence, shares parameters between the two encoders, and finally creates a sequence of contextualized representations of the words and a sequence of contextualized representations of the phrases. Experiments have been conducted on benchmark datasets for Chinese and English, including CLUE, GLUE, SQuAD and RACE. The results show that AMBERT outperforms the existing best-performing models in almost all cases, and the improvements are particularly significant for Chinese.
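A minimal PyTorch sketch of the two-granularity idea (not the authors' implementation): a single shared Transformer encoder processes both the fine-grained (word) sequence and the coarse-grained (phrase) sequence, each with its own embedding table, yielding the two streams of contextualized representations. Vocabulary sizes, depth, and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ToyAMBERT(nn.Module):
    def __init__(self, fine_vocab=30000, coarse_vocab=50000, d_model=256):
        super().__init__()
        self.fine_embed = nn.Embedding(fine_vocab, d_model)      # word-level tokens
        self.coarse_embed = nn.Embedding(coarse_vocab, d_model)  # phrase-level tokens
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        # one encoder object applied to both streams = shared parameters
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, fine_ids, coarse_ids):
        fine_repr = self.shared_encoder(self.fine_embed(fine_ids))
        coarse_repr = self.shared_encoder(self.coarse_embed(coarse_ids))
        return fine_repr, coarse_repr

words = torch.randint(0, 30000, (2, 128))   # toy word-token ids
phrases = torch.randint(0, 50000, (2, 64))  # toy phrase-token ids
w_repr, p_repr = ToyAMBERT()(words, phrases)
```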
Our new arXiv paper "AMBERT: A Pre-trained Language Model with Multi-Grained Tokenization". It works better than BERT, Albert, XLNet, etc at CLUE and GLUE. https://t.co/2jdiy0U1fy
— Hang Li (@dr_hang_li) August 28, 2020
4. GPU-accelerating ImageJ Macro image processing workflows using CLIJ
Daniela Vorkel, Robert Haase
This chapter introduces GPU-accelerated image processing in ImageJ/FIJI. The reader is expected to have some pre-existing knowledge of ImageJ Macro programming; core concepts such as variables, for-loops, and functions are essential. The chapter provides basic guidelines for improving performance in typical image processing workflows, and presents a step-by-step tutorial on how to translate a pre-existing ImageJ macro into a GPU-accelerated macro.
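The chapter's tutorial itself is written in ImageJ Macro; for orientation, here is the same push-process-pull pattern sketched in Python with pyclesperanto_prototype, a Python sibling of CLIJ from the same group. The filter choice and parameters are assumptions, and an OpenCL-capable GPU driver is required.

```python
import numpy as np
import pyclesperanto_prototype as cle

cle.select_device()  # pick an available OpenCL/GPU device
image = np.random.random((1024, 1024)).astype(np.float32)  # stand-in for a microscopy image

gpu_in = cle.push(image)                                   # transfer to GPU memory
gpu_out = cle.gaussian_blur(gpu_in, sigma_x=5, sigma_y=5)  # filter runs on the GPU
result = cle.pull(gpu_out)                                 # transfer back as a numpy array
```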
Fast, faster, GPU - the preprint for #GPU-accelerated macro processing workflows in @FijiSc is online: https://t.co/1qXal4GmvC.🙃 Big THANKS to my colleague ⭐️@haesleinhuepf⭐️!!! 😎, the mastermind behind the code of #clij! 🪲+🔬+ 🖥️=😃 #Neubias #bioimageanalysis
— Daniela (@happifocus) August 28, 2020
New preprint book chapter on how to use #CLIJ GPU-accelerated image processing. It has made my image processing ~60 times faster. Images that took an hour for me to process now can get done in just under a minute with these easy-to-use commands. https://t.co/ZHoPnbO45r
— Tanner Fadero (@TanFad) August 28, 2020
5. Visual Concept Reasoning Networks
Taesup Kim, Sungwoong Kim, Yoshua Bengio
A split-transform-merge strategy has been broadly used as an architectural constraint in convolutional neural networks for visual recognition tasks. It approximates sparsely connected networks by explicitly defining multiple branches to simultaneously learn representations with different visual concepts or properties. Dependencies or interactions between these representations, however, are typically defined by dense and local operations, without any adaptiveness or high-level reasoning. In this work, we propose to exploit this strategy and combine it with our Visual Concept Reasoning Networks (VCRNet) to enable reasoning between high-level visual concepts. We associate each branch with a visual concept and derive a compact concept state by selecting a few local descriptors through an attention module. These concept states are then updated by graph-based interaction and used to adaptively modulate the local descriptors. We describe our proposed model by split-transform-attend-interact-modulate-merge stages, implemented by opting for a highly modularized architecture. Extensive experiments on visual recognition tasks such as image classification, semantic segmentation, object detection, scene recognition, and action recognition show that our proposed model, VCRNet, consistently improves performance while increasing the number of parameters by less than 1%.
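As a toy sketch of the split-transform-attend-interact-modulate-merge stages (not the authors' implementation), the PyTorch module below attends over each branch's feature map to form a compact concept state, lets the states interact via multi-head attention as a stand-in for the graph-based interaction, and uses the result to gate the local features. The branch count and channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class ToyConceptReasoning(nn.Module):
    def __init__(self, branches=4, channels=32):
        super().__init__()
        # attend: one attention map per branch to pool local descriptors
        self.attend = nn.ModuleList(nn.Conv2d(channels, 1, 1) for _ in range(branches))
        # interact: multi-head attention as a stand-in for graph-based interaction
        self.interact = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)
        self.modulate = nn.Linear(channels, channels)

    def forward(self, branch_feats):  # list of (B, C, H, W) tensors, one per branch
        states = []
        for feat, att in zip(branch_feats, self.attend):
            B, C, H, W = feat.shape
            w = torch.softmax(att(feat).view(B, 1, -1), dim=-1)  # where to look
            states.append((w * feat.view(B, C, -1)).sum(-1))     # compact concept state
        states = torch.stack(states, dim=1)                      # (B, branches, C)
        states, _ = self.interact(states, states, states)        # concepts exchange information
        out = []
        for i, feat in enumerate(branch_feats):
            gate = torch.sigmoid(self.modulate(states[:, i]))    # per-channel gate
            out.append(feat * gate[:, :, None, None])            # modulate local descriptors
        return torch.cat(out, dim=1)                             # merge branches

feats = [torch.randn(2, 32, 16, 16) for _ in range(4)]
merged = ToyConceptReasoning()(feats)
```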
Visual Concept Reasoning Networks. #ArtificialIntelligence #DataScience #BigData #Analytics #Python #RStats #TensorFlow #IoT #Java #JavaScript #ReactJS #GoLang #Serverless #Linux #Programmer #DataViz #DataScientists #DeepLearning #MachineLearning #AI https://t.co/DNOm79X8Li pic.twitter.com/Vz13iiVYxV
— Marcus Borba (@marcusborba) August 28, 2020
6. Traces of Class/Cross-Class Structure Pervade Deep Learning Spectra
Vardan Papyan
Numerous researchers have recently applied empirical spectral analysis to the study of modern deep learning classifiers. We identify and discuss an important formal class/cross-class structure and show how it lies at the origin of the many visually striking features observed in deepnet spectra, some of which were reported in recent articles, while others are unveiled here for the first time. These include spectral outliers, “spikes”, and small but distinct continuous distributions, “bumps”, often seen beyond the edge of a “main bulk”. The significance of the cross-class structure is illustrated in three ways: (i) we prove that the ratio of outliers to bulk in the spectrum of the Fisher information matrix is predictive of misclassification, in the context of multinomial logistic regression; (ii) we demonstrate how, gradually with depth, a network is able to separate class-distinctive information from class variability, all while orthogonalizing the class-distinctive information; and (iii) we propose a correction to KFAC, a well-known second-order optimization algorithm for training deepnets.
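A small numpy sketch of the setting behind claim (i): build the Fisher information matrix of a multinomial logistic regression on random data and count spectral outliers beyond a crude bulk edge. The data, the sizes, and the outlier cutoff heuristic are assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 20, 5                      # samples, features, classes
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, k))

logits = X @ W
P = np.exp(logits - logits.max(1, keepdims=True))
P /= P.sum(1, keepdims=True)              # softmax class probabilities

# FIM of the flattened weights: average of kron(diag(p) - p p^T, x x^T)
F = np.zeros((d * k, d * k))
for x, p in zip(X, P):
    F += np.kron(np.diag(p) - np.outer(p, p), np.outer(x, x))
F /= n

eigs = np.sort(np.linalg.eigvalsh(F))[::-1]
bulk_edge = np.median(eigs) * 10          # crude cutoff heuristic, an assumption
print("outliers:", (eigs > bulk_edge).sum(), "of", eigs.size)
```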
Traces of Class/Cross-Class Structure Pervade Deep Learning Spectra. #ArtificialIntelligence #DataScience #BigData #Analytics #Python #RStats #TensorFlow #Java #JavaScript #ReactJS #GoLang #Serverless #Linux #Programmer #MachineLearning #DeepLearning #AI https://t.co/BAUtP7MoPc pic.twitter.com/DpoS0eBgcq
— Marcus Borba (@marcusborba) August 29, 2020
7. Multi-scale approach for the prediction of atomic scale properties
Andrea Grisafi, Jigyasa Nigam, Michele Ceriotti
Electronic nearsightedness is one of the fundamental principles governing the behavior of condensed matter and supporting its description in terms of local entities such as chemical bonds. Locality also underlies the tremendous success of machine-learning schemes that predict quantum mechanical observables, such as the cohesive energy, the electron density, or a variety of response properties, as a sum of atom-centred contributions based on a short-range representation of atomic environments. One of the main shortcomings of these approaches is their inability to capture physical effects that have a long-range nature, from electrostatic interactions to quantum delocalization. Here we show how to build a multi-scale scheme that combines local and non-local information in the same framework, overcoming such limitations. We show that the simplest version of such features can be put in formal correspondence with a multipole expansion of permanent electrostatics. The data-driven nature of the model construction, however, also makes this simple form suitable for tackling different types of delocalized and collective effects. We present several examples ranging from molecular physics to surface science and biophysics, demonstrating the ability of this multi-scale approach to model interactions driven by electrostatics, polarization and dispersion, as well as the cooperative behavior of dielectric response properties.
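As a schematic illustration of combining local and non-local atom-centred information (not the paper's actual multipole construction), the numpy sketch below pairs a short-range Gaussian density term, cut off at a finite radius, with a slowly decaying 1/r term per atom. All functional forms and parameters are assumptions.

```python
import numpy as np

def multiscale_features(positions, cutoff=4.0, eps=1e-8):
    """positions: (N, 3) array of atomic coordinates (arbitrary units)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, np.inf)                  # exclude self-interaction
    short = np.exp(-r**2) * (r < cutoff)         # local, short-range term
    long_range = 1.0 / (r + eps)                 # slowly decaying, non-local term
    # one short-range and one long-range channel per atom; a real model would
    # combine many such channels with learned weights
    return np.stack([short.sum(1), long_range.sum(1)], axis=1)  # (N, 2)

feats = multiscale_features(np.random.rand(10, 3) * 8.0)
```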
Physical interactions are multi-scale, so should be your #machinelearning model. Hot off the #preprint press, exquisite work by Andrea Grisafi and @nccr_marvel #inspirepotentials fellow Jigyasa Nigam will get permanent & polarizable electrostatics, & more! https://t.co/BavPlI0Tjz pic.twitter.com/qiPMj845fA
— cosmo-epfl (@COSMO_EPFL) August 28, 2020