Trending Research

Ordered by GitHub stars accumulated over the last 3 days
1
Visualizing the Loss Landscape of Neural Nets
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better.
483
1.99 stars / hour
 Paper  Code
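
For the entry above, a minimal PyTorch sketch of the basic idea behind loss-landscape visualization: evaluate the loss along a line between two parameter settings (e.g., weights before and after training). This is a plain linear slice with hypothetical model/data names, not the paper's filter-normalized random-direction method.

    import torch

    def interpolate_loss(model, loss_fn, data, target, theta_a, theta_b, steps=25):
        """Loss along the line (1 - alpha) * theta_a + alpha * theta_b.

        theta_a / theta_b are lists of parameter tensors matching model.parameters().
        """
        losses = []
        for alpha in torch.linspace(0.0, 1.0, steps):
            with torch.no_grad():
                for p, a, b in zip(model.parameters(), theta_a, theta_b):
                    p.copy_((1 - alpha) * a + alpha * b)
                losses.append(loss_fn(model(data), target).item())
        return losses

Plotting the returned values against alpha gives a one-dimensional slice of the loss surface; the paper extends this to two filter-normalized random directions.
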
2
ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
In this paper, we present ProxylessNAS, which can directly learn architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training while still allowing a large candidate set.
128
1.23 stars / hour
 Paper  Code
3
A Closed-form Solution to Photorealistic Image Stylization
Photorealistic image stylization concerns transferring style of a reference photo to a content photo with the constraint that the stylized photo should remain photorealistic. The proposed method consists of a stylization step and a smoothing step.
8,973
0.95 stars / hour
 Paper  Code
4
Collaging on Internal Representations: An Intuitive Approach for Semantic Transfiguration
We present a novel CNN-based image editing method that allows the user to change the semantic information of an image over a user-specified region. Our method makes this possible by combining the idea of manifold projection with spatial conditional batch normalization (sCBN), a version of conditional batch normalization with user-specifiable spatial weight maps.

110
0.61 stars / hour
 Paper  Code
5
agents
TF-Agents is a library for Reinforcement Learning in TensorFlow

49
0.49 stars / hour
 Paper  Code
6
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
9,070
0.45 stars / hour
 Paper  Code
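
For the BERT entry above, a hedged usage sketch based on the third-party Hugging Face transformers package (not necessarily the repository linked here), showing how a pre-trained model yields deep bidirectional token representations.

    # pip install transformers torch
    from transformers import BertTokenizer, BertModel

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("Bidirectional context in all layers.", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) contextual embeddings
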
7
Self-Attention Generative Adversarial Networks
In this paper, we propose the Self-Attention Generative Adversarial Network (SAGAN) which allows attention-driven, long-range dependency modeling for image generation tasks. Traditional convolutional GANs generate high-resolution details as a function of only spatially local points in lower-resolution feature maps.
4,444
0.36 stars / hour
 Paper  Code
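
For the SAGAN entry above, an illustrative PyTorch sketch of a self-attention block over convolutional feature maps, letting each output position attend to all spatial locations. The 1/8 query/key channel reduction follows the paper, but this is not the authors' implementation.

    import torch
    import torch.nn as nn

    class SelfAttention2d(nn.Module):
        """Attention over all spatial positions of a feature map (SAGAN-style sketch)."""
        def __init__(self, channels):
            super().__init__()
            self.query = nn.Conv2d(channels, channels // 8, 1)
            self.key = nn.Conv2d(channels, channels // 8, 1)
            self.value = nn.Conv2d(channels, channels, 1)
            self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

        def forward(self, x):
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)  # (b, hw, c//8)
            k = self.key(x).flatten(2)                    # (b, c//8, hw)
            attn = torch.softmax(q @ k, dim=-1)           # (b, hw, hw)
            v = self.value(x).flatten(2)                  # (b, c, hw)
            out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
            return self.gamma * out + x                   # residual connection
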
8
Progressive Growing of GANs for Improved Quality, Stability, and Variation
We describe a new training methodology for generative adversarial networks. The key idea is to grow both the generator and discriminator progressively: starting from a low resolution, we add new layers that model increasingly fine details as training progresses.
4,444
0.36 stars / hour
 Paper  Code
9
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
Generative Adversarial Networks (GANs) excel at creating realistic images with complex models for which maximum likelihood is infeasible. However, the convergence of GAN training has still not been proved.
4,444
0.36 stars / hour
 Paper  Code
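
For the TTUR entry above, a minimal sketch of the two time-scale update rule itself: discriminator and generator are trained with separate optimizers at different learning rates. The modules and the specific rates here are illustrative, not the paper's exact settings.

    import torch

    # Hypothetical generator / discriminator stand-ins.
    G = torch.nn.Linear(100, 784)
    D = torch.nn.Linear(784, 1)

    # TTUR: a faster time scale (larger learning rate) for D than for G.
    opt_D = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.0, 0.9))
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.0, 0.9))
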
10
Detecting Text in Natural Image with Connectionist Text Proposal Network
We propose a novel Connectionist Text Proposal Network (CTPN) that accurately localizes text lines in natural images. The sequential proposals are naturally connected by a recurrent neural network, which is seamlessly incorporated into the convolutional network, resulting in an end-to-end trainable model.

91
0.36 stars / hour
 Paper  Code
11
Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network
Recent work by Cohen et al. has achieved state-of-the-art results for learning spherical images in a rotation invariant way by using ideas from group representation theory and noncommutative harmonic analysis. In this paper we propose a generalization of this work that generally exhibits improved performance, but from an implementation point of view is actually simpler.
19
0.34 stars / hour
 Paper  Code
12
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards.
116,289
0.32 stars / hour
 Paper  Code
13
Neural Nearest Neighbors Networks
To exploit our relaxation, we propose the neural nearest neighbors block (N3 block), a novel non-local processing layer that leverages the principle of self-similarity and can be used as a building block in modern neural network architectures. We show its effectiveness for the set reasoning task of correspondence classification as well as for image restoration, including image denoising and single image super-resolution, where we outperform strong convolutional neural network (CNN) baselines and recent non-local models that rely on KNN selection in hand-chosen feature spaces.
112
0.26 stars / hour
 Paper  Code
14
models
Models and examples built with TensorFlow
45,636
0.25 stars / hour
 Paper  Code
15
Learning towards Minimum Hyperspherical Energy
In light of this intuition, we reduce the redundancy regularization problem to generic energy minimization, and propose a minimum hyperspherical energy (MHE) objective as generic regularization for neural networks. Finally, we apply neural networks with MHE regularization to several challenging tasks.
71
0.22 stars / hour
 Paper  Code
16
Horovod: fast and easy distributed deep learning in TensorFlow
Training modern deep learning models requires large amounts of computation, often provided by GPUs, which must exchange gradient updates with one another when training is distributed. Depending on the particular methods employed, this communication may entail anywhere from negligible to significant overhead.

4,550
0.22 stars / hour
 Paper  Code
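
For the Horovod entry above, a hedged TF1-era usage sketch showing how Horovod wraps an existing optimizer so gradients are averaged across workers via ring-allreduce; scaling the learning rate by worker count is a common convention, not a requirement.

    import tensorflow as tf            # TF1-style API, matching the era of the paper
    import horovod.tensorflow as hvd

    hvd.init()                                         # one process per GPU
    opt = tf.train.AdamOptimizer(0.001 * hvd.size())   # scale LR with worker count
    opt = hvd.DistributedOptimizer(opt)                # allreduce-averaged gradients
    hooks = [hvd.BroadcastGlobalVariablesHook(0)]      # sync initial weights from rank 0
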
17
3D human pose estimation in video with temporal convolutions and semi-supervised training
We also introduce back-projection, a simple and effective semi-supervised training method that leverages unlabeled video data. We start with predicted 2D keypoints for unlabeled video, then estimate 3D poses and finally back-project to the input 2D keypoints.

191
0.22 stars / hour
 Paper  Code
18
Deep Residual Learning for Image Recognition
We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
10,193
0.22 stars / hour
 Paper  Code
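
For the ResNet entry above, a short PyTorch sketch of the residual learning idea: a block learns a residual function F(x) and outputs F(x) + x through an identity shortcut. Simplified relative to the paper (no striding or projection shortcut).

    import torch.nn as nn

    class BasicBlock(nn.Module):
        """Residual block: output = relu(F(x) + x), with F two 3x3 convs."""
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            out = self.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            return self.relu(out + x)   # identity shortcut eases optimization of deep nets
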
19
Video-to-Video Synthesis
We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. Without understanding temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality.
5,276
0.21 stars / hour
 Paper  Code
20
code2vec: Learning Distributed Representations of Code
We demonstrate the effectiveness of our approach by using it to predict a method's name from the vector representation of its body. We evaluate our approach by training a model on a dataset of 14M methods.

221
0.19 stars / hour
 Paper  Code
21
Multi-View Stereo 3D Edge Reconstruction
This paper presents a novel method for the reconstruction of 3D edges in multi-view stereo scenarios. Previous research in the field typically relied on video sequences and limited the reconstruction process to either straight line-segments, or edge-points, i.e., 3D points that correspond to image edges.

26
0.17 stars / hour
 Paper  Code
22
Constrained Graph Variational Autoencoders for Molecule Design
Graphs are ubiquitous data structures for representing interactions between entities. With an emphasis on the use of graphs to represent chemical molecules, we explore the task of learning to generate graphs that conform to a distribution observed in training data.
33
0.17 stars / hour
 Paper  Code
23
Self-critical Sequence Training for Image Captioning
In this paper we consider the problem of optimizing image captioning systems using reinforcement learning, and show that by carefully optimizing our systems using the test metrics of the MSCOCO task, significant gains in performance can be realized. Our systems are built using a new optimization approach that we call self-critical sequence training (SCST).

39
0.17 stars / hour
 Paper  Code
24
Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions.
39
0.17 stars / hour
 Paper  Code
25
tensor2tensor
Library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.

6,014
0.17 stars / hour
 Paper  Code
26
UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology.
94
0.17 stars / hour
 Paper  Code
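
For the UMAP entry above, a hedged usage sketch with the umap-learn package; the dataset and parameter values are illustrative.

    # pip install umap-learn scikit-learn
    import umap
    from sklearn.datasets import load_digits

    X, _ = load_digits(return_X_y=True)
    embedding = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2).fit_transform(X)
    print(embedding.shape)  # (n_samples, 2) low-dimensional coordinates
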
27
Visualizing Large-scale and High-dimensional Data
These two steps suffer from considerable computational costs, preventing state-of-the-art methods such as t-SNE from scaling to large-scale and high-dimensional data (e.g., millions of data points and hundreds of dimensions). We propose LargeVis, a technique that first constructs an accurately approximated K-nearest neighbor graph from the data and then lays out the graph in a low-dimensional space.
94
0.17 stars / hour
 Paper  Code
28
AllenNLP: A Deep Semantic Natural Language Processing Platform
This paper describes AllenNLP, a platform for research on deep learning methods in natural language understanding. AllenNLP is designed to support researchers who want to build novel language understanding models quickly and easily.

4,508
0.16 stars / hour
 Paper  Code
29
Addressing the Fundamental Tension of PCGML with Discriminative Learning
This approach presents a fundamental tension: the more design effort expended to produce detailed training examples for shaping a generator, the lower the return on investment from applying PCGML in the first place. In response, we propose the use of discriminative models (which capture the validity of a design rather than the distribution of the content) trained on positive and negative examples.

10,818
0.16 stars / hour
 Paper  Code
30
A generic framework for privacy preserving deep learning
We detail a new framework for privacy preserving deep learning and discuss its assets. The framework puts a premium on ownership and secure processing of data and introduces a valuable representation based on chains of commands and tensors.

2,315
0.16 stars / hour
 Paper  Code
31
Detectron
FAIR's research platform for object detection, implementing popular algorithms like Mask R-CNN and RetinaNet.
17,883
0.16 stars / hour
 Paper  Code
32
iNNvestigate neural networks!
In recent years, deep neural networks have revolutionized many application domains of machine learning and are key components of many critical decision or predictive processes; understanding why they arrive at a particular decision, however, remains difficult. The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementations of many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.

148
0.15 stars / hour
 Paper  Code
33
SphereFace: Deep Hypersphere Embedding for Face Recognition
This paper addresses the deep face recognition (FR) problem under an open-set protocol, where ideal face features are expected to have a smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion.
1,017
0.15 stars / hour
 Paper  Code
34
Progressive Neural Architecture Search
We propose a new method for learning the structure of convolutional neural networks (CNNs) that is more efficient than recent state-of-the-art methods based on reinforcement learning and evolutionary algorithms. Our approach uses a sequential model-based optimization (SMBO) strategy, in which we search for structures in order of increasing complexity, while simultaneously learning a surrogate model to guide the search through structure space.
2,855
0.15 stars / hour
 Paper  Code
35
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs; for many tasks, however, paired training data is not available. Our goal is to learn a mapping G: X → Y such that the distribution of images from G(X) is indistinguishable from the distribution Y using an adversarial loss.
6,168
0.14 stars / hour
 Paper  Code
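
For the CycleGAN entry above, an illustrative sketch of the cycle-consistency term that, combined with the adversarial losses, makes the unpaired mapping well-posed. G and F_net are hypothetical generator networks for X→Y and Y→X.

    import torch.nn.functional as F

    def cycle_consistency_loss(G, F_net, real_x, real_y, lam=10.0):
        # || F(G(x)) - x ||_1  +  || G(F(y)) - y ||_1, weighted by lambda
        forward_cycle = F.l1_loss(F_net(G(real_x)), real_x)
        backward_cycle = F.l1_loss(G(F_net(real_y)), real_y)
        return lam * (forward_cycle + backward_cycle)
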
36
Image-to-Image Translation with Conditional Adversarial Networks
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping.
6,168
0.14 stars / hour
 Paper  Code
37
Enriching Word Vectors with Subword Information
Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. A vector representation is associated with each character n-gram; words are represented as the sum of these representations.

16,641
0.14 stars / hour
 Paper  Code
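
For the subword-information entry above, a toy sketch of the core idea: a word vector is the sum of vectors for its character n-grams (with boundary markers), so rare and out-of-vocabulary words still receive representations. Hashing buckets and skip-gram training are omitted; all names here are illustrative.

    import numpy as np

    def char_ngrams(word, n_min=3, n_max=6):
        # character n-grams of "<word>" with boundary markers, as in fastText
        w = "<" + word + ">"
        return [w[i:i + n] for n in range(n_min, n_max + 1)
                for i in range(len(w) - n + 1)]

    rng = np.random.default_rng(0)
    dim = 8
    ngram_vectors = {}   # in fastText this is a hashed table trained with skip-gram

    def word_vector(word):
        grams = char_ngrams(word)
        for g in grams:
            ngram_vectors.setdefault(g, rng.normal(size=dim))
        return np.sum([ngram_vectors[g] for g in grams], axis=0)  # sum of subword vectors

    print(word_vector("where")[:3])   # a vector built purely from subword units
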
38
FastText.zip: Compressing text classification models
We consider the problem of producing compact architectures for text classification, such that the full model fits in a limited amount of memory. After considering different solutions inspired by the hashing literature, we propose a method built upon product quantization to store word embeddings.

16,641
0.14 stars / hour
 Paper  Code
39
Bag of Tricks for Efficient Text Classification
This paper explores a simple and efficient baseline for text classification. Our experiments show that our fast text classifier fastText is often on par with deep learning classifiers in terms of accuracy, and many orders of magnitude faster for training and evaluation.

16,641
0.14 stars / hour
 Paper  Code
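
For the fastText classification entry above, a minimal PyTorch sketch of the model family it describes: averaged token embeddings feeding a linear classifier. Vocabulary size, dimensions, and token ids are placeholders, and this is not the fastText library itself.

    import torch
    import torch.nn as nn

    class FastTextClassifier(nn.Module):
        """Averaged word embeddings -> linear classifier (fastText-style sketch)."""
        def __init__(self, vocab_size, embed_dim, num_classes):
            super().__init__()
            self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, mode="mean")
            self.fc = nn.Linear(embed_dim, num_classes)

        def forward(self, token_ids, offsets):
            return self.fc(self.embedding(token_ids, offsets))

    model = FastTextClassifier(vocab_size=30000, embed_dim=100, num_classes=4)
    tokens = torch.tensor([3, 17, 256, 9, 42])   # two documents, concatenated token ids
    offsets = torch.tensor([0, 3])               # start index of each document
    logits = model(tokens, offsets)              # (2, num_classes)
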
40
Consistent Individualized Feature Attribution for Tree Ensembles
Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature's assigned importance when the true impact of that feature actually increases.

2,845
0.14 stars / hour
 Paper  Code
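
For the tree-ensemble attribution entry above, a hedged usage sketch with the shap package, which provides the consistent, individualized attributions the paper describes; the model and dataset are illustrative.

    # pip install shap xgboost scikit-learn
    import shap
    import xgboost
    from sklearn.datasets import load_diabetes

    X, y = load_diabetes(return_X_y=True)
    model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

    explainer = shap.TreeExplainer(model)      # consistent attributions for tree ensembles
    shap_values = explainer.shap_values(X)     # one contribution per sample and feature
    print(shap_values.shape)                   # (n_samples, n_features)
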
41
An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks.
1,258
0.13 stars / hour
 Paper  Code
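
For the sequence-modeling entry above, an illustrative sketch of the building block such convolutional models rely on: a dilated, causal 1-D convolution that pads only on the left, so outputs never depend on future time steps. The paper's TCN additionally uses residual connections and weight normalization.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CausalConv1d(nn.Module):
        """1-D convolution that only looks at past time steps (left padding)."""
        def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
            super().__init__()
            self.pad = (kernel_size - 1) * dilation
            self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

        def forward(self, x):                    # x: (batch, channels, time)
            x = F.pad(x, (self.pad, 0))          # pad on the left only
            return self.conv(x)

    layer = CausalConv1d(8, 8, kernel_size=3, dilation=2)
    y = layer(torch.randn(1, 8, 50))             # output keeps the same length: (1, 8, 50)
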
42
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output.

761
0.13 stars / hour
 Paper  Code
43
BMXNet-v2
BMXNet v2: An Open-Source Binary Neural Network Implementation Based on MXNet

16
0.13 stars / hour
 Paper  Code
44
SplineCNN: Fast Geometric Deep Learning with Continuous B-Spline Kernels
We present Spline-based Convolutional Neural Networks (SplineCNNs), a variant of deep neural networks for irregular structured and geometric input, e.g., graphs or meshes. Our main contribution is a novel convolution operator based on B-splines that makes the computation time independent of the kernel size due to the local support property of the B-spline basis functions.
826
0.13 stars / hour
 Paper  Code
45
HydraPlus-Net: Attentive Deep Features for Pedestrian Analysis
Pedestrian analysis plays a vital role in intelligent video surveillance and is a key component of security-centric computer vision systems. Although convolutional neural networks are remarkable at learning discriminative features from images, learning comprehensive features of pedestrians for fine-grained tasks remains an open problem.
106
0.13 stars / hour
 Paper  Code
46
Deformable Convolutional Networks
Convolutional neural networks (CNNs) are inherently limited in modeling geometric transformations due to the fixed geometric structures in their building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling.
2,060
0.13 stars / hour
 Paper  Code
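
For the deformable-convolution entry above, a hedged usage sketch with torchvision.ops.DeformConv2d (a reimplementation, not the authors' code): a small convolution predicts per-location sampling offsets, which the deformable convolution then uses to sample the input adaptively.

    # pip install torch torchvision
    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    in_ch, out_ch, k = 16, 32, 3
    offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=1)  # (dx, dy) per kernel tap
    deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=1)

    x = torch.randn(1, in_ch, 28, 28)
    offsets = offset_conv(x)        # learned, input-dependent sampling offsets
    y = deform_conv(x, offsets)     # (1, out_ch, 28, 28)
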
47
R-FCN: Object Detection via Region-based Fully Convolutional Networks
We present region-based, fully convolutional networks for accurate and efficient object detection. In contrast to previous region-based detectors such as Fast/Faster R-CNN that apply a costly per-region subnetwork hundreds of times, our region-based detector is fully convolutional with almost all computation shared on the entire image.
2,060
0.13 stars / hour
 Paper  Code
48
Deformable ConvNets v2: More Deformable, Better Results
The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content.

2,060
0.13 stars / hour
 Paper  Code
49
Universal Language Model Fine-tuning for Text Classification
Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model.

163
0.13 stars / hour
 Paper  Code
50
L2C
Learning to Cluster. A deep clustering strategy.
69
0.13 stars / hour
 Paper  Code
51
Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image.
10,194
0.13 stars / hour
 Paper  Code
52
Hand Keypoint Detection in Single Images using Multiview Bootstrapping
We call this procedure multiview bootstrapping: first, an initial keypoint detector is used to produce noisy labels in multiple views of the hand. The method is used to train a hand keypoint detector for single images.
10,194
0.13 stars / hour
 Paper  Code
53
Convolutional Pose Machines
Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation.
10,194
0.13 stars / hour
 Paper  Code
54
Mask R-CNN
Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection.
9,092
0.12 stars / hour
 Paper  Code
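
For the Mask R-CNN entry above, a hedged usage sketch using the torchvision reimplementation (not necessarily the repository linked here) to produce boxes, labels, scores, and per-instance masks.

    # pip install torch torchvision
    import torch
    from torchvision.models.detection import maskrcnn_resnet50_fpn

    model = maskrcnn_resnet50_fpn(pretrained=True).eval()
    image = torch.rand(3, 480, 640)          # stand-in for a real RGB image scaled to [0, 1]
    with torch.no_grad():
        pred = model([image])[0]             # dict with boxes, labels, scores, masks
    print(pred["boxes"].shape, pred["masks"].shape)
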
55
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
1,814
0.12 stars / hour
 Paper  Code
56
Analogical Reasoning on Chinese Morphological and Semantic Relations
Analogical reasoning is effective in capturing linguistic regularities. This paper proposes an analogical reasoning task on Chinese.
3,427
0.12 stars / hour
 Paper  Code
57
FaceNet: A Unified Embedding for Face Recognition and Clustering
Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%.
6,538
0.11 stars / hour
 Paper  Code
58
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic.
2,687
0.11 stars / hour
 Paper  Code