Search Results for author: Sergey Ioffe

Found 11 papers, 5 papers with code

Weighted Ensemble Self-Supervised Learning

no code implementations 18 Nov 2022 Yangjun Ruan, Saurabh Singh, Warren Morningstar, Alexander A. Alemi, Sergey Ioffe, Ian Fischer, Joshua V. Dillon

Ensembling has proven to be a powerful technique for boosting model performance, uncertainty estimation, and robustness in supervised learning.
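As a rough illustration of the general technique the abstract refers to (not this paper's weighting scheme), here is a minimal sketch of combining ensemble member predictions with a convex weighting; the member models and the weights are placeholders.

```python
import torch

def weighted_ensemble(logits_list, weights):
    """Combine per-member class probabilities with a convex weighting.

    logits_list: list of [batch, classes] tensors, one per ensemble member.
    weights:     1-D tensor of non-negative member weights (placeholder values).
    """
    weights = weights / weights.sum()                                # normalize to a convex combination
    probs = torch.stack([l.softmax(dim=-1) for l in logits_list])    # [members, batch, classes]
    return (weights[:, None, None] * probs).sum(dim=0)               # weighted average prediction

# toy usage with three hypothetical ensemble members
members = [torch.randn(4, 10) for _ in range(3)]
combined = weighted_ensemble(members, torch.tensor([1.0, 2.0, 1.0]))
```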

Self-Supervised Learning

Extreme normalization: approximating full-data batch normalization with single examples

no code implementations 29 Sep 2021 Sergey Ioffe

While batch normalization has been successful in speeding up the training of neural networks, it is not well understood.

Towards a Semantic Perceptual Image Metric

no code implementations 1 Aug 2018 Troy Chinen, Johannes Ballé, Chunhui Gu, Sung Jin Hwang, Sergey Ioffe, Nick Johnston, Thomas Leung, David Minnen, Sean O'Malley, Charles Rosenberg, George Toderici

We present a full reference, perceptual image metric based on VGG-16, an artificial neural network trained on object classification.
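A minimal sketch of the general idea of a VGG-16 feature-space image distance, assuming torchvision's ImageNet-pretrained VGG-16; the layer cut-off and the plain mean-squared feature difference are illustrative choices, not the metric defined in the paper.

```python
import torch
import torchvision

# Truncate VGG-16 to an intermediate convolutional stage (illustrative cut-off).
vgg_features = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()

def feature_distance(img_a, img_b):
    """img_a, img_b: [1, 3, H, W] tensors, already resized and ImageNet-normalized."""
    with torch.no_grad():
        fa, fb = vgg_features(img_a), vgg_features(img_b)
    # compare the two images in feature space rather than pixel space
    return torch.mean((fa - fb) ** 2).item()
```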

Image Quality Assessment

No Fuss Distance Metric Learning using Proxies

2 code implementations ICCV 2017 Yair Movshovitz-Attias, Alexander Toshev, Thomas K. Leung, Sergey Ioffe, Saurabh Singh

Traditionally, for this problem supervision is expressed in the form of sets of points that follow an ordinal relationship -- an anchor point $x$ is similar to a set of positive points $Y$, and dissimilar to a set of negative points $Z$, and a loss defined over these distances is minimized.
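For orientation, a minimal sketch of a proxy-based loss in the spirit the title suggests: each example is pulled toward a learnable proxy for its class and pushed away from the other proxies. The softmax-over-distances form shown here is a common simplification, not necessarily the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def proxy_loss(embeddings, labels, proxies):
    """embeddings: [batch, dim]; labels: [batch] class ids; proxies: [classes, dim] learnable."""
    embeddings = F.normalize(embeddings, dim=-1)
    proxies = F.normalize(proxies, dim=-1)
    dists = torch.cdist(embeddings, proxies) ** 2   # squared distance to every proxy
    # pull each example toward its class proxy, push it from the rest
    return F.cross_entropy(-dists, labels)
```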

Metric Learning Semantic Similarity +2

Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models

5 code implementations NeurIPS 2017 Sergey Ioffe

However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples.
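A minimal sketch of the batch renormalization correction described in the paper: minibatch statistics are rescaled toward running estimates through clipped factors r and d, which are treated as constants during backpropagation. The clipping bounds here and the omission of the running-statistic update are simplifications.

```python
import torch

def batch_renorm(x, gamma, beta, running_mean, running_var,
                 r_max=3.0, d_max=5.0, eps=1e-5):
    """Forward pass over a [batch, features] activation (training time)."""
    mu = x.mean(dim=0)
    sigma = torch.sqrt(x.var(dim=0, unbiased=False) + eps)
    running_sigma = torch.sqrt(running_var + eps)
    # correction factors are constants with respect to backprop
    with torch.no_grad():
        r = (sigma / running_sigma).clamp(1.0 / r_max, r_max)
        d = ((mu - running_mean) / running_sigma).clamp(-d_max, d_max)
    x_hat = (x - mu) / sigma * r + d
    return gamma * x_hat + beta
```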

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning

86 code implementations 23 Feb 2016 Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi

Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network.
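For context, the residual connection pattern the abstract refers to, sketched around a plain convolutional branch rather than an actual Inception-ResNet module.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """output = relu(x + f(x)): the residual connection pattern only,
    with a simple two-convolution branch standing in for an Inception module."""
    def __init__(self, channels):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.branch(x))
```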

Classification General Classification +1

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

no code implementations ICML 2015 Sergey Ioffe, Christian Szegedy

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities.

General Classification Image Classification

Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

74 code implementations 11 Feb 2015 Sergey Ioffe, Christian Szegedy

Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change.
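A minimal sketch of the batch normalization transform itself, for a fully connected activation at training time; inference-time use of running statistics and the per-channel convolutional case are omitted.

```python
import torch

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training-time transform for a [batch, features] activation: normalize
    each feature over the minibatch, then scale and shift with the learned
    parameters gamma and beta."""
    mu = x.mean(dim=0)
    var = x.var(dim=0, unbiased=False)
    x_hat = (x - mu) / torch.sqrt(var + eps)
    return gamma * x_hat + beta
```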

Ranked #487 on Image Classification on ImageNet (Number of params metric)

General Classification Image Classification

Scalable, High-Quality Object Detection

no code implementations 3 Dec 2014 Christian Szegedy, Scott Reed, Dumitru Erhan, Dragomir Anguelov, Sergey Ioffe

Using the multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with $0.5$ mAP for a single model and $0.52$ mAP for an ensemble of two models.

Object object-detection +2

Deep Convolutional Ranking for Multilabel Image Annotation

no code implementations 17 Dec 2013 Yunchao Gong, Yangqing Jia, Thomas Leung, Alexander Toshev, Sergey Ioffe

Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications.
