1. No Deal: Investigating the Influence of Restricted Access to Elsevier Journals on German Researchers’ Publishing and Citing Behaviours
Nicholas Fraser, Anne Hobert, Najko Jahn, Philipp Mayr, Isabella Peters
In 2014, a union of German research organisations established Projekt DEAL, a national-level project to negotiate licensing agreements with large scientific publishers. Negotiations between DEAL and Elsevier began in 2016 and broke down without a successful agreement in 2018; during this period, around 200 German research institutions cancelled their license agreements with Elsevier, leading Elsevier to restrict journal access at those institutions from July 2018 onwards. We investigated the effect of these access restrictions on researchers’ publishing and citing behaviours from a bibliometric perspective, using a dataset of ~410,000 articles published by researchers at the affected DEAL institutions between 2012 and 2020. We further investigated these effects with respect to the timing of contract cancellations with Elsevier, research disciplines, collaboration patterns, and article open-access status. We find evidence for a decrease in Elsevier’s market share of articles from DEAL institutions, from a peak of 25.3% in 2015 to 20.6% in 2020, with the largest year-on-year market share decreases occurring in 2019 (-1.1%) and 2020 (-1.6%) following the implementation of access restrictions. We also observe year-on-year decreases in the proportion of citations made from articles published by authors at DEAL institutions to articles in Elsevier journals post-2018, although the decrease is smaller (-0.4% in 2019 and -0.6% in 2020) than changes in publishing volume. We conclude that Elsevier access restrictions have led to some reduced willingness of researchers at DEAL institutions to publish their research in Elsevier journals, but that researchers are not strongly affected in their ability to cite Elsevier articles, with the implication that researchers use a variety of other methods (e.g. interlibrary loans, sharing between colleagues, or “shadow libraries”) to access scientific literature.
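For intuition, the "market share" figures above amount to the yearly fraction of DEAL-institution articles that appeared in Elsevier journals. A minimal sketch of that calculation, assuming a hypothetical pandas table with illustrative column names (this is not the authors' actual pipeline or data):

```python
import pandas as pd

# Hypothetical dataset: one row per article from a DEAL-affiliated institution,
# with its publication year and publisher (columns and values are illustrative only).
articles = pd.DataFrame({
    "year": [2015, 2015, 2015, 2020, 2020, 2020, 2020],
    "publisher": ["Elsevier", "Springer", "Elsevier", "Elsevier", "Wiley", "Springer", "MDPI"],
})

# Elsevier's "market share" per year = share of DEAL-institution articles
# that appeared in Elsevier journals in that year.
share = (
    articles.assign(is_elsevier=articles["publisher"].eq("Elsevier"))
    .groupby("year")["is_elsevier"]
    .mean()
    .mul(100)
    .round(1)
)
print(share)         # percentage of articles in Elsevier journals, by year
print(share.diff())  # year-on-year change in percentage points
```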
Our new preprint is out: "No Deal: Investigating the Influence of Restricted Access to Elsevier Journals on German Researchers' Publishing and Citing Behaviours" https://t.co/dYgbrtcYv7 @Philipp_Mayr @Isabella83 @najkoja @anhobert
— Nick Fraser (@nicholasmfraser) May 26, 2021
2. SiamMOT: Siamese Multi-Object Tracking
Bing Shuai, Andrew Berneshawi, Xinyu Li, Davide Modolo, Joseph Tighe
In this paper, we focus on improving online multi-object tracking (MOT). In particular, we introduce a region-based Siamese Multi-Object Tracking network, which we name SiamMOT. SiamMOT includes a motion model that estimates an instance’s movement between two frames so that detected instances can be associated. To explore how motion modelling affects tracking capability, we present two variants of the Siamese tracker: one that models motion implicitly and one that models it explicitly. We carry out extensive quantitative experiments on three different MOT datasets: MOT17, TAO-person and Caltech Roadside Pedestrians, showing the importance of motion modelling for MOT and the ability of SiamMOT to substantially outperform the state-of-the-art. SiamMOT also outperforms the winners of the ACM MM’20 HiEve Grand Challenge on the HiEve dataset. Moreover, SiamMOT is efficient, running at 17 FPS for 720p videos on a single modern GPU. Code is available at https://github.com/amazon-research/siam-mot.
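To illustrate the idea of an implicit motion model, below is a rough PyTorch sketch that fuses per-instance features from two frames and regresses a box offset plus a matching score. The module name, layer sizes, and shapes are illustrative assumptions, not the exact SiamMOT architecture:

```python
import torch
import torch.nn as nn

class ImplicitMotionHead(nn.Module):
    """Toy implicit motion model: given RoI-aligned features for an instance in
    frame t and the corresponding search region in frame t+1, regress the box
    offset and a match confidence used to associate detections across frames."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * in_channels, in_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.box_delta = nn.Linear(in_channels, 4)  # (dx, dy, dw, dh)
        self.score = nn.Linear(in_channels, 1)      # match confidence

    def forward(self, feat_t: torch.Tensor, feat_t1: torch.Tensor):
        # feat_t, feat_t1: (N, C, H, W) features for N instances
        fused = self.fuse(torch.cat([feat_t, feat_t1], dim=1)).flatten(1)
        return self.box_delta(fused), self.score(fused).sigmoid()

# Usage with dummy RoI features for 5 instances
head = ImplicitMotionHead()
deltas, scores = head(torch.randn(5, 256, 15, 15), torch.randn(5, 256, 15, 15))
```

An explicit variant would instead predict a dense response map over the search region (as in Siamese single-object trackers) rather than regressing the offset directly from fused features.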
SiamMOT: Siamese Multi-Object Tracking
— AK (@ak92501) May 26, 2021
pdf: https://t.co/8Z6KT8ijva
abs: https://t.co/JfL0zZcRTk
github: https://t.co/rWABOKKrWS
a region-based MOT network, detects and associates object instances simultaneously pic.twitter.com/LgeIPmZv9w
3. High-Frequency aware Perceptual Image Enhancement
Hyungmin Roh, Myungjoo Kang
In this paper, we introduce a novel deep neural network suitable for multi-scale analysis and propose efficient model-agnostic methods that help the network extract information from high-frequency domains to reconstruct clearer images. Our model can be applied to multi-scale image enhancement problems including denoising, deblurring, and single-image super-resolution. Experiments on the SIDD, Flickr2K, DIV2K, and REDS datasets show that our method achieves state-of-the-art performance on each task. Furthermore, we show that our model can overcome the over-smoothing problem commonly observed in existing PSNR-oriented methods and generate more natural high-resolution images by applying adversarial training.
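As a hedged illustration of what a "high-frequency aware" training signal can look like, the sketch below uses a generic low-pass/residual decomposition (Gaussian blur plus subtraction) and adds a penalty on the high-frequency residual; this is a common model-agnostic recipe and an assumption here, not necessarily the exact method proposed in the paper:

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """Build a normalized 2D Gaussian kernel for low-pass filtering."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    kernel = torch.outer(g, g)
    return kernel / kernel.sum()

def high_frequency(img: torch.Tensor, size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    """High-frequency residual = image minus its Gaussian-blurred (low-pass) version.
    img: (N, C, H, W), values in [0, 1]."""
    c = img.shape[1]
    k = gaussian_kernel(size, sigma).to(img).expand(c, 1, size, size)
    low = F.conv2d(img, k, padding=size // 2, groups=c)
    return img - low

def hf_aware_loss(pred: torch.Tensor, target: torch.Tensor, weight: float = 0.5) -> torch.Tensor:
    """Pixel loss plus an extra penalty on high-frequency residuals, encouraging
    the network to recover edges and texture instead of over-smoothing."""
    return F.l1_loss(pred, target) + weight * F.l1_loss(
        high_frequency(pred), high_frequency(target)
    )
```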
High-Frequency aware Perceptual Image Enhancement
— AK (@ak92501) May 26, 2021
pdf: https://t.co/1XZkAYnzrT
abs: https://t.co/sevT4gbmwu pic.twitter.com/zumPyikSzP
4. Focus Attention: Promoting Faithfulness and Diversity in Summarization
Rahul Aralikatte, Shashi Narayan, Joshua Maynez, Sascha Rothe, Ryan McDonald
Professional summaries are written with document-level information, such as the theme of the document, in mind. This is in contrast to most seq2seq decoders, which simultaneously learn to focus on salient content and decide what to generate at each decoding step. To narrow this gap, we introduce the Focus Attention Mechanism, a simple yet effective method to encourage decoders to proactively generate tokens that are similar or topically related to the input document. Further, we propose a Focus Sampling method to enable generation of diverse summaries, an area currently understudied in summarization. When evaluated on the BBC extreme summarization task, two state-of-the-art models augmented with Focus Attention generate summaries that are closer to the target and more faithful to their input documents, outperforming their vanilla counterparts on ROUGE and multiple faithfulness measures. We also empirically demonstrate that Focus Sampling is more effective at generating diverse and faithful summaries than top-k or nucleus sampling-based decoding methods.
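To make the underlying idea concrete, here is an illustrative sketch of biasing decoder logits toward tokens that are close to a pooled document representation before sampling. The function, shapes, and the similarity-based bias are hypothetical assumptions for illustration and do not reproduce the paper's exact Focus Attention formulation:

```python
import torch
import torch.nn.functional as F

def topically_biased_logits(
    decoder_logits: torch.Tensor,    # (batch, vocab) raw logits at one decoding step
    token_embeddings: torch.Tensor,  # (vocab, dim) output embedding matrix
    doc_repr: torch.Tensor,          # (batch, dim) pooled encoder representation of the document
    alpha: float = 1.0,
) -> torch.Tensor:
    """Boost tokens whose embeddings are close to the document representation,
    nudging the decoder toward on-topic vocabulary before softmax/sampling."""
    sim = F.normalize(doc_repr, dim=-1) @ F.normalize(token_embeddings, dim=-1).T  # (batch, vocab)
    return decoder_logits + alpha * sim

# Toy usage: sample the next token from the biased distribution
logits = torch.randn(2, 1000)
emb = torch.randn(1000, 64)
doc = torch.randn(2, 64)
probs = F.softmax(topically_biased_logits(logits, emb, doc), dim=-1)
next_token = torch.multinomial(probs, num_samples=1)
```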
Focus Attention: Promoting Faithfulness and Diversity in Summarization
— AK (@ak92501) May 26, 2021
pdf: https://t.co/mlgLzfPzEQ
abs: https://t.co/lKprbPGfc0
a new attention mechanism which dynamically biases the decoder to proactively generate tokens that are topically similar to the input pic.twitter.com/nVIhg5C22Z