1. Learning Gradient Fields for Shape Generation
Ruojin Cai, Guandao Yang, Hadar Averbuch-Elor, Zekun Hao, Serge Belongie, Noah Snavely, Bharath Hariharan
In this work, we propose a novel technique to generate shapes from point cloud data. A point cloud can be viewed as samples from a distribution of 3D points whose density is concentrated near the surface of the shape. Point cloud generation thus amounts to moving randomly sampled points to high-density areas. We generate point clouds by performing stochastic gradient ascent on an unnormalized probability density, thereby moving sampled points toward the high-likelihood regions. Our model directly predicts the gradient of the log density field and can be trained with a simple objective adapted from score-based generative models. We show that our method can reach state-of-the-art performance for point cloud auto-encoding and generation, while also allowing for extraction of a high-quality implicit surface. Code is available at https://github.com/RuojinCai/ShapeGF.
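To make the sampling procedure concrete, here is a minimal PyTorch sketch of the two ingredients described above: a Langevin-style gradient-ascent sampler and a simplified denoising score matching loss. The names (`score_net`, `sample_point_cloud`, `dsm_loss`) are illustrative, not the authors' API, and the actual model additionally conditions on a shape latent and anneals the noise level.

```python
import torch

def sample_point_cloud(score_net, num_points=2048, num_steps=100,
                       step_size=1e-3, noise_scale=1e-2):
    """Move randomly sampled points toward high-density regions by
    stochastic gradient ascent on the (unnormalized) log-density,
    using a network that predicts the gradient of the log density."""
    # Start from points scattered uniformly in a cube around the shape.
    x = torch.rand(num_points, 3) * 2 - 1
    for _ in range(num_steps):
        grad = score_net(x)                      # predicted gradient of log p(x)
        noise = torch.randn_like(x) * noise_scale
        x = x + step_size * grad + noise         # Langevin-style ascent step
    return x

def dsm_loss(score_net, surface_points, sigma=0.05):
    """Simplified denoising score matching: perturb points sampled from the
    surface and regress the score of the Gaussian perturbation kernel."""
    noise = torch.randn_like(surface_points) * sigma
    perturbed = surface_points + noise
    target = -noise / (sigma ** 2)               # gradient of log N(perturbed | clean, sigma^2 I)
    pred = score_net(perturbed)
    return ((pred - target) ** 2).sum(dim=-1).mean()
```

In practice score-based models of this kind train over a schedule of noise levels and shrink the step size accordingly during sampling; the single-sigma version above is only meant to show the shape of the objective and the update rule.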
Learning Gradient Fields for Shape Generation
— AK (@ak92501) August 18, 2020
pdf: https://t.co/tJHDw8Yh5y
abs: https://t.co/mfplJIIWBd
github: https://t.co/Xc4RhA7IXW
project page: https://t.co/v9IS6W0rPh pic.twitter.com/RXJWQfqNQA
Excited to share our ECCV paper on 3D generation: Learning Gradient Fields for Shape Generation.
— Guandao Yang (@stevenygd) August 18, 2020
Arxiv: https://t.co/wjZvA6I0US
Project page: https://t.co/v0WsHUONt1
Code: https://t.co/ugTa58NxIN
Video: https://t.co/k2FBZthl1G
Long video: https://t.co/d0pJXIlhAx pic.twitter.com/dc3C8Tej18
2. Manticore: A 4096-core RISC-V Chiplet Architecture for Ultra-efficient Floating-point Computing
Florian Zaruba, Fabian Schuiki, Luca Benini
Data-parallel problems, commonly found in data analytics, machine learning, and scientific computing, demand ever-growing floating-point operations per second under tight area- and energy-efficiency constraints. Application-specific architectures and accelerators, while efficient at a given task, are hard to adjust to algorithmic changes. In this work, we present Manticore, a general-purpose, ultra-efficient, RISC-V, chiplet-based architecture for data-parallel floating-point workloads. We have manufactured a 9 mm² prototype of the chiplet’s computational core in GlobalFoundries’ 22 nm FD-SOI process and demonstrate more than 2.5× improvement in energy efficiency on floating-point-intensive workloads compared to high-performance compute engines (CPUs and GPUs), despite their more advanced FinFET process. The prototype contains two 64-bit, application-class RISC-V Ariane management cores that run a full-fledged Linux OS. The compute capability at high energy and area efficiency is provided by Snitch clusters. Each cluster contains eight small (20 kGE) 32-bit integer RISC-V cores, each controlling a large double-precision floating-point unit (120 kGE). Each core supports two custom RISC-V ISA extensions: FREP and SSR. The SSR extension elides explicit load and store instructions by encoding them as register reads and writes. The FREP extension mostly decouples the integer core from the FPU by allowing a sequence buffer to issue instructions to the FPU independently. Both extensions allow the tiny, single-issue integer core to saturate the instruction bandwidth of the FPU and achieve FPU utilization above 90%, with more than 80% of core area dedicated to the FPU.
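As a rough, purely illustrative model (not the paper's methodology or the actual ISA encoding), the sketch below counts issued instructions for an n-element dot product on a single-issue core, to show why eliding loads and stores (SSR) and issuing the FPU instruction from a sequence buffer (FREP) can push FPU utilization toward the reported 90%+.

```python
def fpu_utilization(n, ssr=False, frep=False):
    """Back-of-envelope instruction mix for an n-element dot product on a
    single-issue core with one FPU: fraction of issued instructions that
    are floating-point. A toy model, not a cycle-accurate simulation."""
    fpu_ops = n                       # one fused multiply-add per element
    loads = 0 if ssr else 2 * n       # SSR turns explicit loads into register reads
    loop_overhead = 0 if frep else n  # FREP replays the FPU op from a buffer,
                                      # removing per-iteration loop bookkeeping
    issued = fpu_ops + loads + loop_overhead
    return fpu_ops / issued

for ssr, frep in [(False, False), (True, False), (True, True)]:
    print(f"SSR={ssr}, FREP={frep}: {fpu_utilization(1024, ssr, frep):.0%}")
# Roughly 25% -> 50% -> 100%, consistent with the >90% utilization claimed above.
```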
In HC32, researchers have presented Manticore, a 22FDX general-purpose ultra-efficient #RISCV chiplet-based architecture for data-parallel floating-point workloads.https://t.co/rFdANlxxEq pic.twitter.com/O8IweO8iYm
— Underfox (@Underfox3) August 18, 2020
3. Computational timeline reconstruction of the stories surrounding Trump: Story turbulence, narrative control, and collective chronopathy
P. S. Dodds, J. R. Minot, M. V. Arnold, T. Alshaabi, J. L. Adams, A. J. Reagan, C. M. Danforth
- retweets: 24, favorites: 51 (08/19/2020 09:42:22)
- links: abs | pdf
- physics.soc-ph | cs.SI
Measuring the specific kind, temporal ordering, diversity, and turnover rate of stories surrounding any given subject is essential to developing a complete reckoning of that subject’s historical impact. Here, we use Twitter as a distributed news and opinion aggregation source to identify and track the dynamics of the dominant day-scale stories around Donald Trump, the 45th President of the United States. Working with a data set comprising around 20 billion 1-grams, we first compare each day’s 1-gram and 2-gram usage frequencies to those of a year before, to create day- and week-scale timelines for Trump stories for 2016 onwards. We measure Trump’s narrative control, the extent to which stories have been about Trump or put forward by Trump. We then quantify story turbulence and collective chronopathy — the rate at which a population’s stories for a subject seem to change over time. We show that 2017 was the most turbulent year for Trump, and that story generation slowed dramatically during the COVID-19 pandemic in 2020. Trump story turnover for 2 months during the COVID-19 pandemic was on par with that of 3 days in September 2017. Our methods may be applied to any well-discussed phenomenon, and have potential, in particular, to enable the computational aspects of journalism, history, and biography.
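As a toy illustration of the day-over-year comparison described above (not the paper's actual divergence measure), the following sketch computes per-1-gram log-ratios between a day's normalized usage frequencies and those of the same day a year earlier, surfacing the terms that grew the most.

```python
from collections import Counter
from math import log

def day_over_year_shifts(todays_tweets, year_ago_tweets, top_k=10):
    """Compare a day's 1-gram usage frequencies to those of a year before and
    return the 1-grams whose relative usage grew the most."""
    def freqs(tweets):
        counts = Counter(w.lower() for t in tweets for w in t.split())
        total = sum(counts.values())
        return {w: c / total for w, c in counts.items()}

    now, then = freqs(todays_tweets), freqs(year_ago_tweets)
    eps = 1e-9  # smoothing for 1-grams absent on one of the two days
    shift = {w: log(now.get(w, eps) / then.get(w, eps))
             for w in set(now) | set(then)}
    return sorted(shift.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```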
New preprint:
— ComputationlStoryLab (@compstorylab) August 18, 2020
“Computational timeline reconstruction of the stories surrounding Trump: Story turbulence, narrative control, and collective chronopathy”https://t.co/4zAfQmjWlW
P. S. Dodds, J. R. Minot, M. V. Arnold, T. Alshaabi, J. L. Adams, A. J. Reagan, and C. M. Danforth pic.twitter.com/tcxNmiB8Ul
Did the news cycle really launch into warp speed when Trump was elected?
— Chris Danforth (@ChrisDanforth) August 18, 2020
Bigly.
Our latest preprint:https://t.co/6YVG6jEsaQ pic.twitter.com/MGf4Tae56Y
4. Bounds on the QAC^0 Complexity of Approximating Parity
Gregory Rosenthal
QAC circuits are quantum circuits with one-qubit gates and Toffoli gates of arbitrary arity. QAC^0 circuits are QAC circuits of constant depth, and are quantum analogues of AC^0 circuits. We prove the following. For all d ≥ 7 and ε > 0 there is a depth-d QAC circuit of size exp(poly(n^{1/d}) log(n/ε)) that approximates the n-qubit parity function to within error ε on worst-case quantum inputs; previously it was unknown whether QAC circuits of sublogarithmic depth could approximate parity regardless of size. We introduce a class of “mostly classical” QAC circuits, including a major component of our circuit from the above upper bound, and prove a tight lower bound on the size of low-depth, mostly classical circuits that approximate this component. Arbitrary depth-d QAC circuits require at least Ω(n/d) multi-qubit gates to achieve a 1/2 + exp(-o(n/d)) approximation of parity; when d = Θ(log n) this nearly matches an easy O(n) size upper bound for computing parity exactly. QAC circuits with at most two layers of multi-qubit gates cannot achieve a 1/2 + exp(-o(n)) approximation of parity, even non-cleanly; previously it was known only that such circuits could not cleanly compute parity exactly for sufficiently large n. The proofs use a new normal form for quantum circuits which may be of independent interest, and are based on reductions to the problem of constructing certain generalizations of the cat state which we name “nekomata” after an analogous cat yōkai.
Exciting paper by Gregory Rosenthal (@gregrosent), a PhD student @UofT. Proves new bounds on approximating the parity function with QAC0 circuits, making progress on a long-standing question in quantum circuit complexity. Also, his paper has cool notation. https://t.co/M5cJvRZOlX pic.twitter.com/ViDFplJmV4
— Henry Yuen (@henryquantum) August 18, 2020
5. Crossing The Gap: A Deep Dive into Zero-Shot Sim-to-Real Transfer for Dynamics
Eugene Valassakis, Zihan Ding, Edward Johns
Zero-shot sim-to-real transfer of tasks with complex dynamics is a highly challenging and unsolved problem. A number of solutions have been proposed in recent years, but we have found that many works do not present a thorough evaluation in the real world, or underplay the significant engineering effort and task-specific fine-tuning that is required to achieve the published results. In this paper, we dive deeper into the sim-to-real transfer challenge, investigate why it is such a difficult problem, and present objective evaluations of a number of transfer methods across a range of real-world tasks. Surprisingly, we found that a method which simply injects random forces into the simulation performs just as well as more complex methods, such as those which randomise the simulator’s dynamics parameters, or adapt a policy online using recurrent network architectures.
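A minimal sketch of the random-force baseline, written as a gym-style environment wrapper. Here `apply_external_force` is a hypothetical hook standing in for whatever perturbation interface the underlying simulator exposes; the wrapper itself is illustrative, not the authors' implementation.

```python
import numpy as np
import gym

class RandomForceInjection(gym.Wrapper):
    """At each simulation step, occasionally apply a small random external
    force to the scene, so the trained policy must be robust to unmodeled
    dynamics. `apply_external_force` is a placeholder hook, not a real API."""

    def __init__(self, env, force_scale=1.0, prob=0.1):
        super().__init__(env)
        self.force_scale = force_scale
        self.prob = prob  # probability of a perturbation at each step

    def step(self, action):
        if np.random.rand() < self.prob:
            force = np.random.uniform(-self.force_scale, self.force_scale, size=3)
            self.env.unwrapped.apply_external_force(force)  # hypothetical simulator hook
        return self.env.step(action)
```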
Crossing the Gap: A Deep Dive into Zero-Shot Sim-to-Real Transfer for Dynamicshttps://t.co/SY98chnp5P pic.twitter.com/qDqDoKkpB0
— sim2real (@sim2realAIorg) August 18, 2020
For simple robotics tasks (reach, push, slide) injecting random noise performs as well as more complicated domain randomization: https://t.co/FvfSWyumgn
— Eugene Vinitsky (@EugeneVinitsky) August 18, 2020
Suspicion: all these methods are basically just inducing feedback so additional complexity doesn't help
6. Is Supervised Syntactic Parsing Beneficial for Language Understanding? An Empirical Investigation
Goran Glavaš, Ivan Vulić
Traditional NLP has long held (supervised) syntactic parsing necessary for successful higher-level language understanding. The recent advent of end-to-end neural language learning, self-supervised via language modeling (LM), and its success on a wide range of language understanding tasks, however, questions this belief. In this work, we empirically investigate the usefulness of supervised parsing for semantic language understanding in the context of LM-pretrained transformer networks. Relying on the established fine-tuning paradigm, we first couple a pretrained transformer with a biaffine parsing head, aiming to infuse explicit syntactic knowledge from Universal Dependencies (UD) treebanks into the transformer. We then fine-tune the model for language understanding (LU) tasks and measure the effect of the intermediate parsing training (IPT) on downstream LU performance. Results from both monolingual English and zero-shot language transfer experiments (with intermediate target-language parsing) show that explicit formalized syntax, injected into transformers through intermediate supervised parsing, has very limited and inconsistent effect on downstream LU performance. Our results, coupled with our analysis of transformers’ representation spaces before and after intermediate parsing, make a significant step towards providing answers to an essential question: how (un)availing is supervised parsing for high-level semantic language understanding in the era of large neural models?
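For concreteness, here is a minimal PyTorch sketch of a biaffine arc scorer in the style of Dozat and Manning's parser, the kind of "biaffine parsing head" coupled with a pretrained transformer for intermediate parsing training. Dimensions and names are illustrative, and label scoring is omitted.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Scores dependency arcs over contextual token representations:
    scores[b, i, j] rates token j as the syntactic head of token i."""

    def __init__(self, hidden_size, arc_dim=256):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_size, arc_dim), nn.ReLU())
        self.W = nn.Parameter(torch.zeros(arc_dim + 1, arc_dim))  # +1 row acts as a bias term

    def forward(self, token_states):               # (batch, seq, hidden) from the transformer
        heads = self.head_mlp(token_states)        # (batch, seq, arc_dim)
        deps = self.dep_mlp(token_states)          # (batch, seq, arc_dim)
        ones = torch.ones(*deps.shape[:2], 1, device=deps.device)
        deps = torch.cat([deps, ones], dim=-1)     # append constant bias feature
        return deps @ self.W @ heads.transpose(1, 2)   # (batch, seq, seq) arc scores
```

Arc training then reduces to cross-entropy over head positions for each token, before the same transformer is fine-tuned on the downstream language understanding task.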
New work (w. @licwu): https://t.co/QaJmQRZRnd
— Goran Glavaš (@gg42554) August 18, 2020
We know from recent work that pretrained Transformers implicitly encode some kind of syntax. But does this implicit syntax render formal/explicit syntax (i.e., treebanks and supervised parsing) unnecessary for language understanding?
7. Image Stylization for Robust Features
Iaroslav Melekhov, Gabriel J. Brostow, Juho Kannala, Daniyar Turmukhambetov
Local features that are robust to both viewpoint and appearance changes are crucial for many computer vision tasks. In this work we investigate whether photorealistic image stylization improves the robustness of local features not only to day-night changes, but also to weather and season variations. We show that image stylization, in addition to color augmentation, is a powerful method of learning robust features. We evaluate the learned features on visual localization benchmarks, outperforming state-of-the-art baseline models despite training without ground-truth 3D correspondences, using synthetic homographies only. We use the trained feature networks to compete in the Long-Term Visual Localization and Map-based Localization for Autonomous Driving challenges, achieving competitive scores.
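A minimal OpenCV/NumPy sketch (names illustrative) of the homography-based self-supervision mentioned above: generate a training pair related by a known random homography and apply a simple color augmentation. Photorealistic stylization itself would come from a separate style-transfer model applied on top of this pipeline.

```python
import cv2
import numpy as np

def random_homography_pair(image, max_shift=0.15):
    """Create a self-supervised training pair: the original image, a warp of it
    under a random homography, and the ground-truth homography relating them.
    Brightness/contrast jitter stands in for richer appearance augmentation."""
    h, w = image.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Randomly perturb the corners to get a plausible perspective change.
    dst = (src + np.random.uniform(-max_shift, max_shift, src.shape) * [w, h]).astype(np.float32)
    H = cv2.getPerspectiveTransform(src, dst)
    warped = cv2.warpPerspective(image, H, (w, h))
    # Simple color augmentation: random brightness/contrast change.
    alpha, beta = np.random.uniform(0.7, 1.3), np.random.uniform(-30, 30)
    warped = np.clip(alpha * warped.astype(np.float32) + beta, 0, 255).astype(np.uint8)
    return image, warped, H
```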
Image Stylization for Robust Features
— Tomasz Malisiewicz (@quantombone) August 18, 2020
TLDR: image stylization, in addition to color augmentation, is a powerful method of learning robust visual features
Creative use of style transfer Awesome work from @iMelekhov and @NianticLabs https://t.co/X0I5f1yZoP#ComputerVision pic.twitter.com/P3rVq2NCUY