All Articles

Hot Papers 2020-11-16

1. International expert communities on Twitter become more isolated during the COVID-19 pandemic

Francesco Durazzi, Martin Müller, Marcel Salathé, Daniel Remondini

  • retweets: 676, favorites: 74 (11/17/2020 09:25:55)
  • links: abs | pdf
  • cs.SI

COVID-19 represents the most severe global crisis to date whose public conversation can be studied in real time. To do so, we use a data set of over 350 million tweets and retweets posted by over 26 million English speaking Twitter users from January 13 to June 7, 2020. In characterizing the complex retweet network, we identify several stable communities, and are able to link them to scientific expert groups, national elites, and political actors. We find that scientific expert communities received a disproportionate amount of attention early on during the pandemic, and were leading the discussion at the time. However, as the pandemic unfolded, the attention shifted towards both national elites and political actors, paralleled by the introduction of country-specific containment measures and the growing politicization of the debate. Scientific experts remained present in the discussion, but experienced less reach and a higher degree of segregation and isolation. Overall, the emerging communities are characterized by increased self-amplification and polarization. This makes it generally harder for information from international health organizations or authorities to reach a broad audience. These results may have implications for information dissemination in future global crises.

2. SHAD3S: A model to Sketch, Shade and Shadow

Raghav Brahmadesam Venkataramaiyer, Abhishek Joshi, Saisha Narang, Vinay P. Namboodiri

  • retweets: 144, favorites: 38 (11/17/2020 09:25:56)
  • links: abs | pdf
  • cs.CV | cs.GR

Hatching is a common method used by artists to accentuate the third dimension of a sketch, and to illuminate the scene. Our system SHAD3S attempts to compete with a human at hatching generic three-dimensional (3D) shapes, and also tries to assist her in a form exploration exercise. The novelty of our approach lies in the fact that we make no assumptions about the input other than that it represents a 3D shape, and yet, given contextual information about illumination and texture, we synthesise an accurate hatch pattern over the sketch, without access to 3D or pseudo-3D geometry. In the process, we contribute: a) a cheap yet effective method to synthesise a sufficiently large, high-fidelity dataset pertinent to the task; b) a pipeline built on a conditional generative adversarial network (CGAN); and c) an interactive GIMP utility that lets artists engage with automated hatching or a form-exploration exercise. User evaluation of the tool suggests that the model's performance generalises satisfactorily over diverse inputs, in terms of both style and shape. A simple comparison of inception scores suggests that the generated distribution is as diverse as the ground truth.

3. Learning Latent Representations to Influence Multi-Agent Interaction

Annie Xie, Dylan P. Losey, Ryan Tolsma, Chelsea Finn, Dorsa Sadigh

Seamlessly interacting with humans or robots is hard because these agents are non-stationary. They update their policy in response to the ego agent’s behavior, and the ego agent must anticipate these changes to co-adapt. Inspired by humans, we recognize that robots do not need to explicitly model every low-level action another agent will make; instead, we can capture the latent strategy of other agents through high-level representations. We propose a reinforcement learning-based framework for learning latent representations of an agent’s policy, where the ego agent identifies the relationship between its behavior and the other agent’s future strategy. The ego agent then leverages these latent dynamics to influence the other agent, purposely guiding it towards policies suitable for co-adaptation. Across several simulated domains and a real-world air hockey game, our approach outperforms the alternatives and learns to influence the other agent.
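The core loop can be caricatured with a toy repeated game. Everything below is an illustrative assumption, not the paper's model: the other agent's hidden strategy is a scalar that drifts toward the ego agent's actions, and the ego agent both estimates that latent from observations and exploits the latent dynamics to steer it toward a target strategy.

```python
import random

# Toy repeated game (illustrative assumptions, not the paper's model):
# the other agent's hidden "latent strategy" is a scalar, its action is
# that scalar plus noise, and after every round it co-adapts by drifting
# toward the ego agent's action. The ego agent estimates the latent from
# observed actions and exploits the latent dynamics: acting at the
# target strategy pulls the other agent's policy there over time.

random.seed(0)

TARGET_LATENT = 0.8   # strategy the ego agent wants to induce
ADAPT_RATE = 0.3      # assumed co-adaptation speed of the other agent

latent = 0.1          # other agent's true (hidden) strategy
estimate = 0.0        # ego agent's running estimate of the latent

for step in range(200):
    # Influence: since the other agent drifts toward the ego's actions,
    # the ego simply acts at the strategy it wants to induce.
    ego_action = TARGET_LATENT
    # The other agent acts according to its current latent strategy.
    other_action = latent + random.gauss(0.0, 0.01)
    # The ego updates its latent estimate from the observed response.
    estimate += 0.5 * (other_action - estimate)
    # Non-stationarity: the other agent co-adapts to the ego's behavior.
    latent += ADAPT_RATE * (ego_action - latent)

print(round(latent, 2), round(estimate, 2))
```

The point of the sketch is the two roles the latent plays: it is inferred (the estimate tracks the hidden strategy) and it is influenced (the induced strategy converges to the target because the ego anticipates the co-adaptation dynamics).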

4. Continual Learning with Deep Artificial Neurons

Blake Camp, Jaya Krishna Mandivarapu, Rolando Estrada

  • retweets: 90, favorites: 44 (11/17/2020 09:25:56)
  • links: abs | pdf
  • cs.AI

Neurons in real brains are enormously complex computational units. Among other things, they’re responsible for transforming inbound electro-chemical vectors into outbound action potentials, updating the strengths of intermediate synapses, regulating their own internal states, and modulating the behavior of other nearby neurons. One could argue that these cells are the only things exhibiting any semblance of real intelligence. It is odd, therefore, that the machine learning community has, for so long, relied upon the assumption that this complexity can be reduced to a simple sum-and-fire operation. We ask, might there be some benefit to substantially increasing the computational power of individual neurons in artificial systems? To answer this question, we introduce Deep Artificial Neurons (DANs), which are themselves realized as deep neural networks. Conceptually, we embed DANs inside each node of a traditional neural network, and we connect these neurons at multiple synaptic sites, thereby vectorizing the connections between pairs of cells. We demonstrate that it is possible to meta-learn a single parameter vector, which we dub a neuronal phenotype, shared by all DANs in the network, which facilitates a meta-objective during deployment. Here, we isolate continual learning as our meta-objective, and we show that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting, using standard backpropagation, without experience replay or separate wake/sleep phases. We demonstrate this ability on sequential non-linear regression tasks.
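The structural idea can be sketched in a few lines (sizes and the forward pass below are illustrative assumptions, not the paper's architecture): each node of a host network is itself a tiny MLP whose weights are shared across all nodes, playing the role of the neuronal phenotype, and the connections between nodes carry small vectors rather than scalars.

```python
import numpy as np

# Minimal sketch of the Deep Artificial Neurons idea: every "neuron" in
# a one-layer host network is a tiny 2-layer MLP, and all neurons share
# one parameter set (the "neuronal phenotype"). Inter-neuron connections
# ("synapses") are small matrices, so signals between cells are vectors.
# All sizes here are illustrative, not the paper's architecture.

rng = np.random.default_rng(0)

VEC = 4  # dimensionality of each inter-neuron connection

# Shared phenotype: the weights of the tiny MLP inside every DAN.
W1 = rng.standard_normal((VEC, 8)) * 0.5
W2 = rng.standard_normal((8, VEC)) * 0.5

def dan(x_vec):
    """One Deep Artificial Neuron: a tiny MLP shared by all nodes."""
    h = np.tanh(x_vec @ W1)
    return np.tanh(h @ W2)

def layer(inputs, synapses):
    """Host-network layer: each synapse is a VEC x VEC matrix, and each
    downstream DAN processes the summed vector-valued input."""
    outputs = []
    for j in range(synapses.shape[1]):
        summed = sum(inputs[i] @ synapses[i, j] for i in range(len(inputs)))
        outputs.append(dan(summed))
    return outputs

# Host network: 3 upstream DANs feeding 2 downstream DANs.
synapses = rng.standard_normal((3, 2, VEC, VEC)) * 0.1
inputs = [rng.standard_normal(VEC) for _ in range(3)]
outputs = layer(inputs, synapses)
print(len(outputs), outputs[0].shape)
```

In the paper's setup, only the synapse matrices would be updated during deployment, while the shared phenotype (here `W1`, `W2`) is meta-learned beforehand and then frozen.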

5. DeepMind Lab2D

Charles Beattie, Thomas Köppe, Edgar A. Duéñez-Guzmán, Joel Z. Leibo

  • retweets: 30, favorites: 25 (11/17/2020 09:25:56)
  • links: abs | pdf
  • cs.AI

We present DeepMind Lab2D, a scalable environment simulator for artificial intelligence research that facilitates researcher-led experimentation with environment design. DeepMind Lab2D was built with the specific needs of multi-agent deep reinforcement learning researchers in mind, but it may also be useful beyond that particular subfield.

6. Deep Reinforcement Learning of Transition States

Jun Zhang, Yao-Kun Lei, Zhen Zhang, Xu Han, Maodong Li, Lijiang Yang, Yi Isaac Yang, Yi Qin Gao

Combining reinforcement learning (RL) and molecular dynamics (MD) simulations, we propose a machine-learning approach (RL‡) to automatically unravel chemical reaction mechanisms. In RL‡, locating the transition state of a chemical reaction is formulated as a game, where a virtual player is trained to shoot simulation trajectories connecting the reactant and product. The player utilizes two functions, one for value estimation and the other for policy making, to iteratively improve the chance of winning this game. We can directly interpret the reaction mechanism according to the value function. Meanwhile, the policy function enables efficient sampling of transition paths, which can be further used to analyze the reaction dynamics and kinetics. Through multiple experiments, we show that RL‡ can be trained tabula rasa, and hence allows us to reveal chemical reaction mechanisms with minimal subjective bias.
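The shooting-game framing can be illustrated on a toy 1-D double-well potential (everything below is an illustrative stand-in, not the paper's RL‡ algorithm): from a candidate shooting point we launch pairs of short noisy relaxation trajectories, and a shot "wins" when the two ends land in different basins. The win rate plays the role of a crude value estimate, peaking at the transition state.

```python
import math
import random

# Illustrative sketch only, not the paper's RL algorithm: the potential
# V(x) = (x^2 - 1)^2 has a reactant basin near x = -1, a product basin
# near x = +1, and a barrier top (transition state) at x = 0. A shot
# from x0 wins when two independent noisy relaxations from x0 end in
# different basins, so the win rate is highest at the transition state.

random.seed(1)

def grad_v(x):
    """Gradient of the double-well potential V(x) = (x^2 - 1)^2."""
    return 4.0 * x * (x * x - 1.0)

def relax(x, steps=200, noise=0.05, dt=0.01):
    """Noisy gradient descent; returns which basin the trajectory reaches."""
    for _ in range(steps):
        x -= dt * grad_v(x) + noise * random.gauss(0.0, math.sqrt(dt))
    return -1 if x < 0 else 1

def win_rate(x0, shots=200):
    """Fraction of paired shots from x0 that connect the two basins."""
    wins = 0
    for _ in range(shots):
        fwd, bwd = relax(x0), relax(x0)
        if fwd != bwd:
            wins += 1
    return wins / shots

# Deep inside the reactant basin almost every pair falls the same way;
# at the barrier top roughly half the pairs connect reactant to product.
print(win_rate(-0.9), win_rate(0.0))
```

In the paper, an RL agent learns where to shoot from (the policy) and how likely each configuration is to yield connecting trajectories (the value); here the brute-force `win_rate` stands in for that learned value function.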