Self-supervised bidirectional transformer models such as BERT have led to dramatic improvements in a wide variety of textual classification tasks.

Mathematical expressions were generated, evaluated, and used to train transformer-based neural network models.
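
A toy sketch of such a data pipeline, assuming nothing about the actual paper's grammar or task: randomly build arithmetic expression strings, evaluate them, and emit (expression, value) pairs as training examples. The function names here are illustrative, not from the work itself.

```python
import random
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def random_expr(depth=2, rng=random):
    """Recursively build a random, fully parenthesized arithmetic expression."""
    if depth == 0:
        return str(rng.randint(1, 9))
    op = rng.choice(list(OPS))
    return f"({random_expr(depth - 1, rng)} {op} {random_expr(depth - 1, rng)})"

def make_pair(depth=2, rng=random):
    """Return one (expression, value) training pair."""
    expr = random_expr(depth, rng)
    # eval is safe here: expr contains only single digits, parentheses, and + - *
    return expr, eval(expr)

rng = random.Random(0)
expr, value = make_pair(2, rng)
print(expr, '=', value)
```

A model would then be trained sequence-to-sequence, reading the expression string and predicting the value (or, in symbolic settings, another expression).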

We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters.
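
The sentence describes the capsule-network convention: the vector's norm encodes existence probability, its direction encodes pose. A minimal numpy sketch of the standard "squash" nonlinearity (from the capsules literature) that enforces this, keeping direction while mapping the norm into [0, 1):

```python
import numpy as np

def squash(s, eps=1e-8):
    """Squash a capsule's activity vector so its length lies in [0, 1).

    Direction (the instantiation parameters) is preserved; the norm is
    mapped to ||s||^2 / (1 + ||s||^2), interpretable as the probability
    that the entity exists.
    """
    norm_sq = np.sum(s * s, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

v = squash(np.array([3.0, 4.0]))   # input norm 5
print(np.linalg.norm(v))           # 25/26 ≈ 0.9615, direction unchanged
```

Long vectors saturate toward length 1 (entity almost certainly present); short vectors shrink toward 0.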

Holographic wave-shaping has found numerous applications across the physical sciences, especially since the development of digital spatial-light modulators (SLMs).

We investigate different down-sampling factors (the ratio of pixels in the input to pixels in the output) for the SPN and show that the RNN-SPN model can down-sample the input images without degrading performance.

In this paper, we introduce the Differentiable Digital Signal Processing (DDSP) library, which enables direct integration of classic signal processing elements with deep learning methods.
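
The core idea is that classic DSP elements (oscillators, filters, reverbs) can be written so that every operation is differentiable and therefore trainable end to end. As a rough illustration of that idea only (this is not the DDSP library's API), here is a numpy sketch of additive harmonic synthesis built entirely from differentiable operations (cumulative sum, sine, multiply):

```python
import numpy as np

def harmonic_oscillator(f0, amps, sample_rate=16000):
    """Additive synthesis: a sum of harmonics of a (time-varying) fundamental.

    f0   : array [T] of fundamental frequency in Hz, one value per sample
    amps : array [T, K] of per-harmonic amplitudes
    Phase is the cumulative sum of instantaneous frequency, so a network
    can predict f0 and amps and receive gradients through the synthesizer.
    """
    K = amps.shape[1]
    harmonics = f0[:, None] * np.arange(1, K + 1)[None, :]        # [T, K] in Hz
    phase = 2.0 * np.pi * np.cumsum(harmonics, axis=0) / sample_rate
    return np.sum(amps * np.sin(phase), axis=-1)                  # [T] audio

T = 16000
audio = harmonic_oscillator(np.full(T, 220.0),
                            np.tile([1.0, 0.5, 0.25], (T, 1)))
print(audio.shape)  # (16000,)
```

In the actual library the same structure is expressed in an autodiff framework so the synthesizer sits inside the training graph.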

Given the ability to directly manipulate image pixels in the digital input space, an adversary can easily generate imperceptible perturbations to fool a Deep Neural Network (DNN) image classifier, as demonstrated in prior work.
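
One widely cited way such imperceptible perturbations are generated is the Fast Gradient Sign Method (FGSM): nudge every pixel by a small ±ε in the direction that increases the classifier's loss. A self-contained sketch on a toy linear "classifier" (the model here is a stand-in, not any particular DNN):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM step: move each pixel by +/- eps along the sign of the loss
    gradient, then clip back into the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

# Toy linear model: score = w . x. For a logistic loss the gradient
# w.r.t. x is proportional to w, so sign(grad) = sign(w) suffices here.
rng = np.random.default_rng(0)
w = rng.normal(size=784)
x = rng.uniform(0.2, 0.8, size=784)     # a fake flattened 28x28 image
x_adv = fgsm_perturb(x, w, eps=0.03)
print(np.max(np.abs(x_adv - x)))        # <= 0.03 per pixel: visually negligible
```

With a real DNN the gradient comes from backpropagation through the network to the input pixels; the per-pixel budget ε is what keeps the perturbation imperceptible.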

Many transformations in deep learning architectures are sparsely connected.

Reconfigurable photonic mesh networks of tunable beamsplitter nodes can linearly transform $N$-dimensional vectors representing input modal amplitudes of light for applications such as energy-efficient machine learning hardware, quantum information processing, and mode demultiplexing.
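
Each tunable beamsplitter node acts on two optical modes as a 2×2 unitary; a mesh of such nodes composes them into an N×N linear transform. A numpy sketch of one common node parameterization (one of several used in the literature; the exact convention varies by paper):

```python
import numpy as np

def mzi(theta, phi):
    """2x2 unitary transfer matrix of a tunable Mach-Zehnder beamsplitter
    node: theta sets the splitting ratio, phi a relative input phase."""
    return np.array([
        [np.exp(1j * phi) * np.cos(theta), -np.sin(theta)],
        [np.exp(1j * phi) * np.sin(theta),  np.cos(theta)],
    ])

U = mzi(0.3, 1.1)
# Unitarity means optical power is conserved through the node:
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True
```

Stacking these 2×2 blocks on adjacent mode pairs, layer by layer, realizes an arbitrary N×N unitary on the modal amplitude vector, which is what makes the mesh usable as a programmable linear layer.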