1. Machine learning accelerated computational fluid dynamics
Dmitrii Kochkov, Jamie A. Smith, Ayya Alieva, Qing Wang, Michael P. Brenner, Stephan Hoyer
- retweets: 9526, favorites: 14 (02/03/2021 10:20:09)
- links: abs | pdf
- physics.flu-dyn | cs.LG
Numerical simulation of fluids plays an essential role in modeling many physical phenomena, such as weather, climate, aerodynamics and plasma physics. Fluids are well described by the Navier-Stokes equations, but solving these equations at scale remains daunting, limited by the computational cost of resolving the smallest spatiotemporal features. This leads to unfavorable trade-offs between accuracy and tractability. Here we use end-to-end deep learning to improve approximations inside computational fluid dynamics for modeling two-dimensional turbulent flows. For both direct numerical simulation of turbulence and large eddy simulation, our results are as accurate as baseline solvers with 8-10x finer resolution in each spatial dimension, resulting in 40-80x computational speedups. Our method remains stable during long simulations and generalizes to forcing functions and Reynolds numbers outside of the flows on which it is trained, in contrast to black-box machine learning approaches. Our approach exemplifies how scientific computing can leverage machine learning and hardware accelerators to improve simulations without sacrificing accuracy or generalization.
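The hybrid recipe at the heart of the paper is easy to illustrate: advance the flow with a cheap coarse-grid solver, then let a learned model correct the result toward fine-grid accuracy. Below is a minimal numpy sketch of that loop; `coarse_step` and `correction_net` are hypothetical stand-ins, not the authors' JAX-CFD implementation.

```python
import numpy as np

def coarse_step(u, dt=0.01):
    """Placeholder coarse-grid update (here: explicit diffusion via np.roll)."""
    return u + dt * (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) - 2 * u)

def correction_net(u, weights):
    """Stand-in for a trained convolutional correction model."""
    return weights * (np.roll(u, 1, axis=1) - np.roll(u, -1, axis=1))

def hybrid_step(u, weights):
    # Numerics carry the bulk of the update; the learned term nudges the
    # coarse solution toward what a much finer solver would produce.
    u_coarse = coarse_step(u)
    return u_coarse + correction_net(u_coarse, weights)

u = np.random.randn(64, 64)   # one component of a coarse 2D velocity field
for _ in range(100):          # stability over long rollouts is the key claim
    u = hybrid_step(u, weights=0.05)
```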
Excited to share "Machine learning accelerated computational fluid dynamics" https://t.co/8rXhLGTVZC
We use ML inside a CFD simulator to advance the accuracy/speed Pareto frontier
with/ Jamie A. Smith, Ayya Alieva, Qing Wang, Michael P. Brenner, @shoyer pic.twitter.com/RcQDfEAqkH
1/2
— Dmitrii Kochkov (@dkochkov1) February 2, 2021
2. Can We Automate Scientific Reviewing?
Weizhe Yuan, Pengfei Liu, Graham Neubig
The rapid development of science and technology has been accompanied by an exponential growth in peer-reviewed scientific publications. At the same time, the review of each paper is a laborious process that must be carried out by subject matter experts. Thus, providing high-quality reviews of this growing number of papers is a significant challenge. In this work, we ask the question “can we automate scientific reviewing?”, discussing the possibility of using state-of-the-art natural language processing (NLP) models to generate first-pass peer reviews for scientific papers. Arguably the most difficult part of this is defining what a “good” review is in the first place, so we first discuss possible evaluation measures for such reviews. We then collect a dataset of papers in the machine learning domain, annotate them with different aspects of content covered in each review, and train targeted summarization models that take in papers to generate reviews. Comprehensive experimental results show that system-generated reviews tend to touch upon more aspects of the paper than human-written reviews, but the generated text can suffer from lower constructiveness for all aspects except the explanation of the core ideas of the papers, which are largely factually correct. We finally summarize eight challenges in the pursuit of a good review generation system together with potential solutions, which, hopefully, will inspire more future research on this subject. We make all code and the dataset publicly available: https://github.com/neulab/ReviewAdvisor, as well as a ReviewAdvisor system: http://review.nlpedia.ai/.
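To give a flavor of the "targeted summarization" framing, one can steer an off-the-shelf summarizer with aspect prefixes. This is only a hedged sketch: the checkpoint, the aspect list, and the prefixing trick are illustrative assumptions, not the authors' ReviewAdvisor models (see their repository for those).

```python
from transformers import pipeline

# Illustrative checkpoint; the paper trains its own aspect-targeted models.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

paper_text = "We propose a method for ... (paper body would go here)"
aspects = ["summary", "clarity", "originality"]  # hypothetical aspect tags

review = {
    a: summarizer(f"{a}: {paper_text}", max_length=60, min_length=10)[0]["summary_text"]
    for a in aspects
}
for aspect, text in review.items():
    print(f"[{aspect}] {text}")
```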
Here's a scary thought:
"Can We Automate Scientific Reviewing?" https://t.co/TAr9Dpwklg
Try it for yourself: https://t.co/OWVzZY3G8g
For the love of all that is good, please don't use this hastily in your own reviewing work. pic.twitter.com/FGtP8OzIcX
— Peyman Milanfar (@docmilanfar) February 2, 2021
3. Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches
Nelson F. Liu, Tony Lee, Robin Jia, Percy Liang
Datasets are not only resources for training accurate, deployable systems, but are also benchmarks for developing new modeling approaches. While large, natural datasets are necessary for training accurate systems, are they necessary for driving modeling innovation? For example, while the popular SQuAD question answering benchmark has driven the development of new modeling approaches, could synthetic or smaller benchmarks have led to similar innovations? This counterfactual question is impossible to answer, but we can study a necessary condition: the ability of a benchmark to recapitulate findings made on SQuAD. We conduct a retrospective study of 20 SQuAD modeling approaches, investigating how well 32 existing and synthesized benchmarks concur with SQuAD — i.e., do they rank the approaches similarly? We carefully construct small, targeted synthetic benchmarks that do not resemble natural language, yet have high concurrence with SQuAD, demonstrating that naturalness and size are not necessary for reflecting historical modeling improvements on SQuAD. Our results raise the intriguing possibility that small and carefully designed synthetic benchmarks may be useful for driving the development of new modeling approaches.
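Concurrence here reduces to asking whether two benchmarks rank the same set of approaches the same way, which a rank-correlation statistic captures directly. A minimal sketch with Kendall's tau; the model names and scores are invented for illustration.

```python
from scipy.stats import kendalltau

models = ["BiDAF", "DrQA", "BERT-base", "BERT-large", "RoBERTa"]
squad_f1     = [77.3, 79.0, 88.5, 90.9, 94.6]  # hypothetical SQuAD scores
synthetic_f1 = [61.0, 63.2, 80.1, 84.0, 89.7]  # hypothetical synthetic-benchmark scores

tau, _ = kendalltau(squad_f1, synthetic_f1)
print(f"concurrence (Kendall's tau) = {tau:.2f}")  # 1.0: identical ranking
```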
Large, natural datasets are invaluable for training accurate, deployable systems, but are they required for driving modeling innovation? Can we use small, synthetic benchmarks instead? Our new paper asks this: https://t.co/b9WYYonQxi
w/ Tony Lee, @robinomial, @percyliang
(1/8) pic.twitter.com/sdhRk5UT5L
— Nelson Liu (@nelsonfliu) February 2, 2021
4. Speech Recognition by Simply Fine-tuning BERT
Wen-Chin Huang, Chia-Hua Wu, Shang-Bao Luo, Kuan-Yu Chen, Hsin-Min Wang, Tomoki Toda
We propose a simple method for automatic speech recognition (ASR) by fine-tuning BERT, which is a language model (LM) trained on large-scale unlabeled text data and can generate rich contextual representations. Our assumption is that given a history context sequence, a powerful LM can narrow the range of possible choices and the speech signal can be used as a simple clue. Hence, compared with conventional ASR systems that train a powerful acoustic model (AM) from scratch, we believe that speech recognition is possible by simply fine-tuning a BERT model. As an initial study, we demonstrate the effectiveness of the proposed idea on the AISHELL dataset and show that stacking a very simple AM on top of BERT can yield reasonable performance.
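The idea admits a very small sketch: a lightweight acoustic module projects speech frames into BERT's embedding space, and the fine-tuned BERT predicts tokens per frame. All sizes and module shapes here are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class SimpleAMOnBERT(nn.Module):
    def __init__(self, n_mels=80, vocab_size=21128):  # 21128: bert-base-chinese vocab
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-chinese")  # AISHELL is Mandarin
        hidden = self.bert.config.hidden_size
        # The "very simple AM": project acoustic frames to BERT's input width.
        self.am = nn.Sequential(nn.Linear(n_mels, hidden), nn.ReLU(),
                                nn.Linear(hidden, hidden))
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, mel_frames):                 # (batch, time, n_mels)
        inputs_embeds = self.am(mel_frames)        # fed as embeddings, not token ids
        out = self.bert(inputs_embeds=inputs_embeds).last_hidden_state
        return self.head(out)                      # per-frame token logits

logits = SimpleAMOnBERT()(torch.randn(2, 50, 80))  # sanity check: (2, 50, 21128)
```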
Speech Recognition by Simply Fine-tuning BERT
pdf: https://t.co/2kit83mnj9
abs: https://t.co/qyOosTp8Ey pic.twitter.com/brIQfVxWim
— AK (@ak92501) February 2, 2021
5. About Face: A Survey of Facial Recognition Evaluation
Inioluwa Deborah Raji, Genevieve Fried
We survey over 100 face datasets constructed between 1976 and 2019, comprising 145 million images of over 17 million subjects from a range of sources, demographics and conditions. Our historical survey reveals that these datasets are contextually informed, shaped by changes in political motivations, technological capability and current norms. We discuss how such influences mask specific practices (some of which may actually be harmful or otherwise problematic) and make a case for the explicit communication of such details in order to establish a more grounded understanding of the technology’s function in the real world.
A long overdue pre-print is finally out today!📣📣
Me & @genmaicha____ wrote of the horrors we find looking through over 100 face datasets with 145 million images of over 17 million subjects (clues: lots of children & Mexican VISAs). https://t.co/WPQ9F1F9tI pic.twitter.com/cugBg6DeIT
— Deb Raji (@rajiinio) February 2, 2021
6. Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks
Torsten Hoefler, Dan Alistarh, Tal Ben-Nun, Nikoli Dryden, Alexandra Peste
- retweets: 730, favorites: 92 (02/03/2021 10:20:10)
- links: abs | pdf
- cs.LG | cs.AI | cs.AR | cs.CV | cs.NE
The growing energy and performance costs of deep learning have driven the community to reduce the size of neural networks by selectively pruning components. Similarly to their biological counterparts, sparse networks generalize just as well as, if not better than, the original dense networks. Sparsity can reduce the memory footprint of regular networks to fit mobile devices, as well as shorten training time for ever-growing networks. In this paper, we survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice. Our work distills ideas from more than 300 research papers and provides guidance to practitioners who wish to utilize sparsity today, as well as to researchers whose goal is to push the frontier forward. We include the necessary background on mathematical methods in sparsification, describe phenomena such as early structure adaptation, the intricate relations between sparsity and the training process, and show techniques for achieving acceleration on real hardware. We also define a metric of pruned parameter efficiency that could serve as a baseline for comparison of different sparse networks. We close by speculating on how sparsity can improve future workloads and outline major open problems in the field.
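Of the many sparsification strategies the survey covers, the most common baseline is magnitude pruning. A minimal PyTorch sketch of the global variant, not tied to any particular result in the paper:

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9):
    """Zero out the smallest-magnitude weights globally across all layers."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(all_weights, sparsity)
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                      # prune weight matrices, not biases
            masks[name] = (p.detach().abs() > threshold).float()
            p.data.mul_(masks[name])
    return masks                             # reapply after each optimizer step

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = magnitude_prune(model, sparsity=0.9)   # ~90% of weights are now zero
```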
The future of #DeepLearning is sparse! See our overview of the field and upcoming opportunities for how to gain 10-100x performance to fuel the next #AI revolution. #HPC techniques will be key as large-scale training is #supercomputing. https://t.co/Pji3zVk2kc #MachineLearning pic.twitter.com/jDaiiGzoTt
— Torsten Hoefler (@thoefler) February 2, 2021
7. Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard Hovy, Hinrich Schütze, Yoav Goldberg
Consistency of a model — that is, the invariance of its behavior under meaning-preserving alternations in its input — is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create ParaRel, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for thirty-eight relations. Using ParaRel, we show that the consistency of all PLMs we experiment with is poor — though with high variance between relations. Our analysis of the representational spaces of PLMs suggests that they have a poor structure and are currently not suitable for representing knowledge in a robust way. Finally, we propose a method for improving model consistency and experimentally demonstrate its effectiveness.
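The probing setup is simple to reproduce in miniature: query a masked LM with two paraphrases of the same factual relation and check whether its predictions agree. The paraphrase strings below are illustrative, in the spirit of ParaRel rather than drawn from it.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

paraphrases = [
    "Albert Einstein was born in [MASK].",
    "Albert Einstein is originally from [MASK].",
]
preds = [fill(p)[0]["token_str"] for p in paraphrases]
print(preds, "-> consistent" if len(set(preds)) == 1 else "-> inconsistent")
```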
Are our Language Models consistent? Apparently not!
Our new paper quantifies that: https://t.co/yaOLpbreFW
w/ @KassnerNora, @ravfogel, @Lasha1608, Ed Hovy, @HinrichSchuetze, and @yoavgo pic.twitter.com/7mb2KhZxFE
— lazary (@yanaiela) February 2, 2021
8. Video Transformer Network
Daniel Neimark, Omri Bar, Maya Zohar, Dotan Asselmann
This paper presents VTN, a transformer-based framework for video recognition. Inspired by recent developments in vision transformers, we ditch the standard approach in video action recognition that relies on 3D ConvNets and introduce a method that classifies actions by attending to the entire video sequence information. Our approach is generic and builds on top of any given 2D spatial network. In terms of wall-clock runtime, it trains faster and runs faster during inference while maintaining competitive accuracy compared to other state-of-the-art methods. It enables whole video analysis, via a single end-to-end pass, while requiring fewer GFLOPs. We report competitive results on Kinetics-400 and present an ablation study of VTN properties and the trade-off between accuracy and inference speed. We hope our approach will serve as a new baseline and start a fresh line of research in the video recognition domain. Code and models will be available soon.
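The VTN recipe sketches neatly: per-frame features from any 2D backbone, a temporal transformer attending over the whole frame sequence, then a classification head. Layer sizes below are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TinyVTN(nn.Module):
    def __init__(self, num_classes=400, dim=512):       # 400 classes, as in Kinetics-400
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                     # 2D spatial features per frame
        self.backbone = backbone
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=3)
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, video):                           # (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.backbone(video.flatten(0, 1)).view(b, t, -1)
        attended = self.temporal(feats)                 # attend over the full sequence
        return self.cls(attended.mean(dim=1))           # pool over time, classify

logits = TinyVTN()(torch.randn(2, 8, 3, 224, 224))      # (2, 400)
```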
Video Transformer Network
pdf: https://t.co/LPEKgWBHSu
abs: https://t.co/yN319VZu9J pic.twitter.com/fdn91cAUu9
— AK (@ak92501) February 2, 2021
9. Melon Playlist Dataset: a public dataset for audio-based playlist generation and music tagging
Andres Ferraro, Yuntae Kim, Soohyeon Lee, Biho Kim, Namjun Jo, Semi Lim, Suyon Lim, Jungtaek Jang, Sehwan Kim, Xavier Serra, Dmitry Bogdanov
- retweets: 262, favorites: 61 (02/03/2021 10:20:10)
- links: abs | pdf
- cs.SD | cs.IR | cs.LG | cs.MM | eess.AS
One of the main limitations in the field of audio signal processing is the lack of large public datasets with audio representations and high-quality annotations due to restrictions of copyrighted commercial music. We present Melon Playlist Dataset, a public dataset of mel-spectrograms for 649,091 tracks and 148,826 associated playlists annotated by 30,652 different tags. All the data is gathered from Melon, a popular Korean streaming service. The dataset is suitable for music information retrieval tasks, in particular, auto-tagging and automatic playlist continuation. Even though the latter can be addressed by collaborative filtering approaches, audio provides opportunities for research on track suggestions and building systems resistant to the cold-start problem, for which we provide a baseline. Moreover, the playlists and the annotations included in the Melon Playlist Dataset make it suitable for metric learning and representation learning.
Happy to announce the release for #ISMIR and #Recsys of Melon Playlist Dataset, including mel-spectrograms for 649,091 tracks and 148,826 playlists. A collaboration between @mtg_upf and @Team_Kakao
web: https://t.co/1L0OlZxwlc
ICASSP paper: https://t.co/1DKDyeRfME
— andres ferraro (@andrebola_) February 2, 2021
10. ObjectAug: Object-level Data Augmentation for Semantic Image Segmentation
Jiawei Zhang, Yanchun Zhang, Xiaowei Xu
Semantic image segmentation aims to obtain object labels with precise boundaries, which usually suffers from overfitting. Recently, various data augmentation strategies like regional dropout and mix strategies have been proposed to address the problem. These strategies have proved to be effective for guiding the model to attend to less discriminative parts. However, current strategies operate at the image level, and objects and the background are coupled. Thus, the boundaries are not well augmented due to the fixed semantic scenario. In this paper, we propose ObjectAug to perform object-level augmentation for semantic image segmentation. ObjectAug first decouples the image into individual objects and the background using the semantic labels. Next, each object is augmented individually with commonly used augmentation methods (e.g., scaling, shifting, and rotation). Then, the black area brought by object augmentation is further restored using image inpainting. Finally, the augmented objects and background are assembled as an augmented image. In this way, the boundaries can be fully explored in the various semantic scenarios. In addition, ObjectAug can support category-aware augmentation that gives various possibilities to objects in each category, and can be easily combined with existing image-level augmentation methods to further boost performance. Comprehensive experiments are conducted on both natural image and medical image datasets. Experiment results demonstrate that our ObjectAug can evidently improve segmentation performance.
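The four-stage pipeline is concrete enough for a toy sketch on a single image/mask pair: cut each object out with its semantic mask, augment it independently, inpaint the hole it leaves, and paste everything back. OpenCV's Telea inpainting below is a stand-in for the learned inpainter; the shift augmentation and all values are illustrative.

```python
import cv2
import numpy as np

def object_aug(image, mask, object_id, dx=10, dy=0):
    obj_mask = (mask == object_id).astype(np.uint8)
    # 1) decouple: remove the object and restore the background behind it
    background = cv2.inpaint(image, obj_mask * 255, 3, cv2.INPAINT_TELEA)
    # 2) augment the object alone (here: a simple shift)
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    obj = cv2.warpAffine(image * obj_mask[..., None], M, image.shape[1::-1])
    new_mask = cv2.warpAffine(obj_mask, M, image.shape[1::-1])
    # 3) reassemble image and label so boundaries see a new semantic scenario
    out = np.where(new_mask[..., None] > 0, obj, background)
    aug_mask = np.where(new_mask > 0, object_id,
                        np.where(obj_mask > 0, 0, mask)).astype(mask.dtype)
    return out, aug_mask

img = np.random.randint(0, 255, (128, 128, 3), np.uint8)
seg = np.zeros((128, 128), np.uint8); seg[40:80, 40:80] = 1
aug_img, aug_seg = object_aug(img, seg, object_id=1)
```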
ObjectAug: Object-level Data Augmentation for Semantic Image Segmentation https://t.co/anmv1PSvcG pic.twitter.com/GYt4agRspz
— phalanx (@ZFPhalanx) February 2, 2021
11. Expressive Neural Voice Cloning
Paarth Neekhara, Shehzeen Hussain, Shlomo Dubnov, Farinaz Koushanfar, Julian McAuley
Voice cloning is the task of learning to synthesize the voice of an unseen speaker from a few samples. While current voice cloning methods achieve promising results in Text-to-Speech (TTS) synthesis for a new voice, these approaches lack the ability to control the expressiveness of synthesized audio. In this work, we propose a controllable voice cloning method that allows fine-grained control over various style aspects of the synthesized speech for an unseen speaker. We achieve this by explicitly conditioning the speech synthesis model on a speaker encoding, pitch contour and latent style tokens during training. Through both quantitative and qualitative evaluations, we show that our framework can be used for various expressive voice cloning tasks using only a few transcribed or untranscribed speech samples for a new speaker. These cloning tasks include style transfer from a reference speech, synthesizing speech directly from text, and fine-grained style control by manipulating the style conditioning variables during inference.
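The conditioning scheme can be sketched as a decoder that consumes text features concatenated with a speaker embedding, a per-frame pitch contour, and a soft combination of learned style tokens. Every module and dimension below is an illustrative assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ConditionedSynthesizer(nn.Module):
    def __init__(self, text_dim=256, spk_dim=64, n_style=10, style_dim=64):
        super().__init__()
        self.style_tokens = nn.Parameter(torch.randn(n_style, style_dim))
        in_dim = text_dim + spk_dim + 1 + style_dim   # +1: per-frame pitch value
        self.decoder = nn.GRU(in_dim, 256, batch_first=True)
        self.to_mel = nn.Linear(256, 80)

    def forward(self, text_feats, spk_emb, pitch, style_weights):
        style = style_weights @ self.style_tokens     # soft style-token mixture
        t = text_feats.size(1)
        cond = torch.cat([text_feats,
                          spk_emb.unsqueeze(1).expand(-1, t, -1),
                          pitch.unsqueeze(-1),
                          style.unsqueeze(1).expand(-1, t, -1)], dim=-1)
        out, _ = self.decoder(cond)
        return self.to_mel(out)                       # mel-spectrogram frames

mel = ConditionedSynthesizer()(torch.randn(2, 40, 256), torch.randn(2, 64),
                               torch.randn(2, 40), torch.softmax(torch.randn(2, 10), -1))
```

Manipulating `pitch` or `style_weights` at inference time is what gives the fine-grained control the abstract describes.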
Expressive Neural Voice Cloning
pdf: https://t.co/c62JV3T1tM
abs: https://t.co/2xEc0QdtSO
project page: https://t.co/b5neKCOy2W
demo: https://t.co/32PVuRdvUD pic.twitter.com/lgilaKNwsi
— AK (@ak92501) February 2, 2021
12. Self-Supervised Equivariant Scene Synthesis from Video
Cinjon Resnick, Or Litany, Cosmas Heiß, Hugo Larochelle, Joan Bruna, Kyunghyun Cho
We propose a self-supervised framework to learn scene representations from video that are automatically delineated into background, characters, and their animations. Our method capitalizes on moving characters being equivariant with respect to their transformation across frames and the background being constant with respect to that same transformation. After training, we can manipulate image encodings in real time to create unseen combinations of the delineated components. As far as we know, ours is the first method to perform unsupervised extraction and synthesis of interpretable background, character, and animation. We demonstrate results on three datasets: Moving MNIST with backgrounds, 2D video game sprites, and Fashion Modeling.
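The training signal can be stated as two code-space constraints: the character code should be equivariant under the frame-to-frame transform, while the background code should be invariant. A toy sketch with a hypothetical encoder and a trivial transform, just to make the losses concrete:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in encoder splitting a frame into background and character codes."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2 * dim))
        self.dim = dim
    def forward(self, x):
        h = self.net(x)
        return h[:, :self.dim], h[:, self.dim:]       # (background, character)

enc = TinyEncoder()
frame_t, frame_t1 = torch.randn(4, 1, 28, 28), torch.randn(4, 1, 28, 28)
T = lambda z: z + 0.1                                 # toy stand-in for the transform
bg, char = enc(frame_t)
bg1, char1 = enc(frame_t1)
loss = (F.mse_loss(T(char), char1)   # character: equivariant under T
        + F.mse_loss(bg, bg1))       # background: invariant across frames
loss.backward()
```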
"Self-supervised Equivariant Scene Synthesis from Video" (https://t.co/KX0MoGP7Ms). In stop motion animation, each character moves one affine step at a time. Can we learn the transformation, the background, and the character encoding simultaneously ... without supervision?
— Cinjon Resnick (@cinjoncin) February 2, 2021
Self-Supervised Equivariant Scene Synthesis from Video
pdf: https://t.co/tDeZBkgVv8
abs: https://t.co/u5MnGbxKn9 pic.twitter.com/qbF6Cbhnfb
— AK (@ak92501) February 2, 2021
13. The effect of differential victim crime reporting on predictive policing systems
Nil-Jana Akpinar, Alexandra Chouldechova
Police departments around the world have been experimenting with forms of place-based data-driven proactive policing for over two decades. Modern incarnations of such systems are commonly known as hot spot predictive policing. These systems predict where future crime is likely to concentrate such that police can allocate patrols to these areas and deter crime before it occurs. Previous research on fairness in predictive policing has concentrated on the feedback loops which occur when models are trained on discovered crime data, but has limited implications for models trained on victim crime reporting data. We demonstrate how differential victim crime reporting rates across geographical areas can lead to outcome disparities in common crime hot spot prediction models. Our analysis is based on a simulation patterned after district-level victimization and crime reporting survey data for Bogotá, Colombia. Our results suggest that differential crime reporting rates can lead to a displacement of predicted hotspots from high crime but low reporting areas to high or medium crime and high reporting areas. This may lead to misallocations both in the form of over-policing and under-policing.
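The central mechanism fits in a few lines: a hot spot model only ever sees *reported* crime, so geographic differences in reporting rates displace the predicted hot spot. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
districts = ["A", "B", "C", "D"]
true_crime = np.array([100, 90, 40, 30])         # actual victimization counts
report_rate = np.array([0.2, 0.7, 0.9, 0.5])     # share of crimes reported

reported = rng.binomial(true_crime, report_rate)  # all the model ever observes
pred = districts[int(np.argmax(reported))]
true = districts[int(np.argmax(true_crime))]
print(f"true hot spot: {true}, predicted from reports: {pred}")
# District A (highest crime, lowest reporting) is displaced by a
# higher-reporting district: under-policing there, over-policing elsewhere.
```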
My first @FAccTConference paper with @achould is online!🙂
Link: https://t.co/wGlVjgTmtr
We demonstrate how common crime hot spot prediction models can suffer from spatial bias even if trained entirely on victim crime reporting data 🧵👇 pic.twitter.com/uUjnN6EeVG
— Nil-Jana Akpinar (@niljanaakpinar) February 2, 2021
14. Evaluating Large-Vocabulary Object Detectors: The Devil is in the Details
Achal Dave, Piotr Dollár, Deva Ramanan, Alexander Kirillov, Ross Girshick
By design, average precision (AP) for object detection aims to treat all classes independently: AP is computed independently per category and averaged. On the one hand, this is desirable as it treats all classes, rare to frequent, equally. On the other hand, it ignores cross-category confidence calibration, a key property in real-world use cases. Unfortunately, we find that on imbalanced, large-vocabulary datasets, the default implementation of AP is neither category independent, nor does it directly reward properly calibrated detectors. In fact, we show that the default implementation produces a gameable metric, where a simple, nonsensical re-ranking policy can improve AP by a large margin. To address these limitations, we introduce two complementary metrics. First, we present a simple fix to the default AP implementation, ensuring that it is truly independent across categories as originally intended. We benchmark recent advances in large-vocabulary detection and find that many reported gains do not translate to improvements under our new per-class independent evaluation, suggesting recent improvements may arise from difficult-to-interpret changes to cross-category rankings. Given the importance of reliably benchmarking cross-category rankings, we consider a pooled version of AP (AP-pool) that rewards properly calibrated detectors by directly comparing cross-category rankings. Finally, we revisit classical approaches for calibration and find that explicitly calibrating detectors improves state-of-the-art on AP-pool by 1.7 points.
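The contrast between per-class AP and a pooled, calibration-sensitive AP is easy to demonstrate on invented detections: two categories with perfect within-class rankings, where one category's inflated confidences hurt only the pooled score.

```python
import numpy as np

def average_precision(scores, labels):
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(labels)[order]
    precision = np.cumsum(tp) / (np.arange(len(tp)) + 1)
    return (precision * tp).sum() / max(tp.sum(), 1)

cat1 = ([0.90, 0.60], [1, 0])   # well-calibrated category
cat2 = ([0.99, 0.95], [1, 0])   # same ranking, inflated confidences
per_class = np.mean([average_precision(*cat1), average_precision(*cat2)])
pooled = average_precision(cat1[0] + cat2[0], cat1[1] + cat2[1])
print(f"mean per-class AP = {per_class:.2f}, pooled AP = {pooled:.2f}")
# Per-class AP is 1.00, but pooled AP drops to 0.83 because cat2's false
# positive now outranks cat1's true positive across categories.
```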
Ross and colleagues point out a gameable element in AP computation. Because implementations cap the number of detections per image, mean AP can be raised by manipulating which detections are kept: AP improves under the unnatural policy of deleting frequent-class detections even when they are high-confidence while including low-confidence rare-class detections. https://t.co/2bbBzm7M1k pic.twitter.com/k6WGe0TKdS
— Kazuyuki Miyazawa (@kzykmyzw) February 2, 2021
15. Neural 3D Clothes Retargeting from a Single Image
Jae Shin Yoon, Kihwan Kim, Jan Kautz, Hyun Soo Park
In this paper, we present a method of clothes retargeting; generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image. The problem is fundamentally ill-posed as attaining the ground truth data is impossible, i.e., images of people wearing the given 3D clothing template model in exactly the same pose. We address this challenge by utilizing large-scale synthetic data generated from physical simulation, allowing us to map 2D dense body pose to 3D clothing deformation. With the simulated data, we propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching with the prescribed body-to-cloth contact points and clothing silhouette to fit onto the unlabeled real images. A new neural clothes retargeting network (CRNet) is designed to integrate the semi-supervised retargeting task in an end-to-end fashion. In our evaluation, we show that our method can predict the realistic 3D pose and deformation field needed for retargeting clothes models in real-world examples.
Neural 3D Clothes Retargeting from a Single Image
pdf: https://t.co/wX8yzHFfaB
abs: https://t.co/amFinRZT5L pic.twitter.com/vUymIDdFTk
— AK (@ak92501) February 2, 2021
16. Modeling how social network algorithms can influence opinion polarization
Henrique F. de Arruda, Felipe M. Cardoso, Guilherme F. de Arruda, Alexis R. Hernández, Luciano da F. Costa, Yamir Moreno
Among different aspects of social networks, dynamics have been proposed to simulate how opinions can be transmitted. In this study, we propose a model that simulates the communication in an online social network, in which the posts are created from external information. We considered the nodes and edges of a network as users and their friendship, respectively. A real number is associated with each user representing its opinion. The dynamics starts with a user that has contact with a random opinion, and, according to a given probability function, this individual can post this opinion. This step is henceforth called post transmission. In the next step, called post distribution, another probability function is employed to select the user’s friends that could see the post. Post transmission and distribution represent the user and the social network algorithm, respectively. If an individual has contact with a post, its opinion can be attracted or repulsed. Furthermore, individuals that are repulsed can change their friendship through a rewiring. These steps are repeated until the dynamics converge. The model gives rise to scenarios of both polarization and consensus of opinions. In the case of echo chambers, the possibility of rewiring is found to be decisive. However, for particular network topologies, with a well-defined community structure, this effect can also happen. All in all, the results indicate that the post distribution strategy is crucial to mitigate or promote polarization.
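A compact simulation captures the two steps, post transmission (the user decides to post) and post distribution (the platform decides who sees it), with bounded attraction/repulsion updates. Parameters and the feed-bias rule are illustrative; rewiring of repulsed users is omitted for brevity.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.watts_strogatz_graph(200, 6, 0.1)
opinion = rng.uniform(-1, 1, G.number_of_nodes())

def step(eps=0.3, mu=0.1, feed_bias=0.5):
    u = int(rng.integers(len(opinion)))
    post = float(np.clip(opinion[u] + rng.normal(0, 0.1), -1, 1))  # post transmission
    for v in G.neighbors(u):
        # post distribution: the algorithm preferentially shows agreeable posts
        if rng.random() < feed_bias * abs(post - opinion[v]) / 2:
            continue                                   # filtered out of v's feed
        if abs(post - opinion[v]) < eps:
            opinion[v] += mu * (post - opinion[v])     # attraction
        else:
            opinion[v] -= mu * (post - opinion[v])     # repulsion
        opinion[v] = np.clip(opinion[v], -1, 1)

for _ in range(20000):
    step()
print(f"opinion spread after convergence: [{opinion.min():.2f}, {opinion.max():.2f}]")
```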
Our new preprint, "Modeling how social network algorithms can influence opinion polarization", is out. https://t.co/9YcqVryMnf @fmacielcardoso @GuiFdeArruda @er_chechi @LdaFCosta @cosnet_bifi
— Henrique F. de Arruda (@hfarruda) February 2, 2021
17. High Fidelity Speech Regeneration with Application to Speech Enhancement
Adam Polyak, Lior Wolf, Yossi Adi, Ori Kabeli, Yaniv Taigman
Speech enhancement has seen great improvement in recent years mainly through contributions in denoising, speaker separation, and dereverberation methods that mostly deal with environmental effects on vocal audio. To enhance speech beyond the limitations of the original signal, we take a regeneration approach, in which we recreate the speech from its essence, including the semi-recognized speech, prosody features, and identity. We propose a wav-to-wav generative model for speech that can generate 24 kHz speech in real time and which utilizes a compact speech representation, composed of ASR and identity features, to achieve a higher level of intelligibility. Inspired by voice conversion methods, we train to augment the speech characteristics while preserving the identity of the source using an auxiliary identity network. Perceptual acoustic metrics and subjective tests show that the method obtains valuable improvements over recent baselines.
High Fidelity Speech Regeneration with Application to Speech Enhancement
pdf: https://t.co/vgkNbTPbtJ
abs: https://t.co/FNlnR7emyv
project page: https://t.co/1Bhe8iX6pQ pic.twitter.com/7GstbrPCg6
— AK (@ak92501) February 2, 2021
18. Beyond the Command: Feminist STS Research and Critical Issues for the Design of Social Machines
Kelly B. Wagman, Lisa Parks
Machines, from artificially intelligent digital assistants to embodied robots, are becoming more pervasive in everyday life. Drawing on feminist science and technology studies (STS) perspectives, we demonstrate how machine designers are not just crafting neutral objects, but relationships between machines and humans that are entangled in human social issues such as gender and power dynamics. Thus, in order to create a more ethical and just future, the dominant assumptions currently underpinning the design of these human-machine relations must be challenged and reoriented toward relations of justice and inclusivity. This paper contributes the “social machine” as a model for technology designers who seek to recognize the importance, diversity and complexity of the social in their work, and to engage with the agential power of machines. In our model, the social machine is imagined as a potentially equitable relationship partner that has agency and as an “other” that is distinct from, yet related to, humans, objects, and animals. We critically examine and contrast our model with tendencies in robotics that consider robots as tools, human companions, animals or creatures, and/or slaves. In doing so, we demonstrate ingrained dominant assumptions about human-machine relations and reveal the challenges of radical thinking in the social machine design space. Finally, we present two design challenges based on non-anthropomorphic figuration and mutuality, and call for experimentation, unlearning dominant tendencies, and reimagining of sociotechnical futures.
Excited to share that my paper with Lisa Parks on how designers & technologists can conceptualize human-machine relations in a way that is equitable/just has been accepted to #CSCW21! Really poured my heart and soul into this one... https://t.co/P9TyGBbA8x pic.twitter.com/bksngozXgc
— Kelly Wagman (@kellybwagman) February 2, 2021
19. Can Machine Learning Help in Solving Cargo Capacity Management Booking Control Problems?
Justin Dumouchelle, Emma Frejinger, Andrea Lodi
Revenue management is important for carriers (e.g., airlines and railroads). In this paper, we focus on cargo capacity management which has received less attention in the literature than its passenger counterpart. More precisely, we focus on the problem of controlling booking accept/reject decisions: Given a limited capacity, accept a booking request or reject it to reserve capacity for future bookings with potentially higher revenue. We formulate the problem as a finite-horizon stochastic dynamic program. The cost of fulfilling the accepted bookings, incurred at the end of the horizon, depends on the packing and routing of the cargo. This is a computationally challenging aspect as the latter are solutions to an operational decision-making problem, in our application a vehicle routing problem (VRP). Seeking a balance between online and offline computation, we propose to train a predictor of the solution costs to the VRPs using supervised learning. In turn, we use the predictions online in approximate dynamic programming and reinforcement learning algorithms to solve the booking control problem. We compare the results to an existing approach in the literature and show that we are able to obtain control policies that provide increased profit at a reduced evaluation time. This is achieved thanks to accurate approximation of the operational costs and negligible computing time in comparison to solving the VRPs.
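The hybrid is straightforward to sketch: train a fast supervised predictor of the operational (VRP) cost offline, then consult it online instead of solving a VRP at every booking decision. The model, features, and the one-step accept rule below are illustrative stand-ins for the paper's ADP/RL policies.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Offline: (booking-set features -> VRP solver cost) pairs; synthetic here.
X = rng.uniform(0, 1, (1000, 3))                        # e.g. volume, weight, spread
y = 50 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(0, 1, 1000)
cost_model = GradientBoostingRegressor().fit(X, y)

def accept(request_revenue, state_features, new_state_features):
    """Online accept/reject: compare revenue to the predicted marginal cost."""
    marginal_cost = (cost_model.predict([new_state_features])[0]
                     - cost_model.predict([state_features])[0])
    return request_revenue > marginal_cost

print(accept(12.0, [0.20, 0.10, 0.30], [0.35, 0.20, 0.30]))
```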
20. Machine Translationese: Effects of Algorithmic Bias on Linguistic Complexity in Machine Translation
Eva Vanmassenhove, Dimitar Shterionov, Matthew Gwilliam
Recent studies in the field of Machine Translation (MT) and Natural Language Processing (NLP) have shown that existing models amplify biases observed in the training data. The amplification of biases in language technology has mainly been examined with respect to specific phenomena, such as gender bias. In this work, we go beyond the study of gender in MT and investigate how bias amplification might affect language in a broader sense. We hypothesize that the ‘algorithmic bias’, i.e. an exacerbation of frequently observed patterns in combination with a loss of less frequent ones, not only exacerbates societal biases present in current datasets but could also lead to an artificially impoverished language: ‘machine translationese’. We assess the linguistic richness (on a lexical and morphological level) of translations created by different data-driven MT paradigms - phrase-based statistical (PB-SMT) and neural MT (NMT). Our experiments show that there is a loss of lexical and morphological richness in the translations produced by all investigated MT paradigms for two language pairs (EN<=>FR and EN<=>ES).
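One simple lens on lexical impoverishment is type-token ratio (TTR): machine output that over-produces frequent words scores lower. The paper uses a broader battery of lexical and morphological metrics; this is only the smallest illustrative one, on made-up sentences.

```python
def type_token_ratio(text: str) -> float:
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

human = "the ancient city gleamed beneath a copper dusk as bells rang out"
mt = "the old city was nice in the evening and the bells rang in the evening"

print(f"human TTR = {type_token_ratio(human):.2f}")  # richer vocabulary
print(f"MT TTR    = {type_token_ratio(mt):.2f}")     # repeats frequent words
```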
Our latest paper on algorithmic bias, machine translationese and lexical and morphological richness is now available (https://t.co/9lzCBDQiuw) and will soon be presented at @eaclmeeting.
Co-authored with @DShterionov from @TilburgU and Matthew Gwilliam from @UofMaryland.
— Eva Vanmassenhove (@Evanmassenhove) February 2, 2021