Search Results for author: Dominic Kafka

Found 5 papers, 1 paper with code

GOALS: Gradient-Only Approximations for Line Searches Towards Robust and Consistent Training of Deep Neural Networks

no code implementations • 23 May 2021 • Younghwan Chae, Daniel N. Wilke, Dominic Kafka

The results show that training a model with the recommended learning rate for a class of search directions helps to reduce the model errors in multimodal cases.

Gradient-only line searches to automatically determine learning rates for a variety of stochastic training algorithms

1 code implementation • 29 Jun 2020 • Dominic Kafka, Daniel Nicolas Wilke

Gradient-only and probabilistic line searches have recently reintroduced the ability to adaptively determine learning rates in dynamic mini-batch sub-sampled neural network training.
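The core idea of a gradient-only line search is to ignore noisy function values and instead locate the step size where the directional derivative changes sign from negative to non-negative. The following sketch illustrates that idea on a toy quadratic loss; it is an illustrative bisection implementation, not the authors' code, and all function names are assumptions.

```python
import numpy as np

def directional_derivative(grad_fn, x, d, alpha):
    """Directional derivative of the loss along d at step size alpha."""
    return grad_fn(x + alpha * d) @ d

def gradient_only_line_search(grad_fn, x, d, alpha_max=2.0, tol=1e-6):
    """Bisect for the step size where the directional derivative flips
    from negative to non-negative (a gradient-only optimality condition)."""
    lo, hi = 0.0, alpha_max
    # Grow the bracket until the directional derivative turns non-negative.
    while directional_derivative(grad_fn, x, d, hi) < 0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if directional_derivative(grad_fn, x, d, mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy quadratic loss L(x) = 0.5 * ||x||^2, so grad L(x) = x.
grad = lambda x: x
x0 = np.array([3.0, -4.0])
d = -grad(x0)  # steepest-descent direction
alpha = gradient_only_line_search(grad, x0, d)
# For this quadratic, the exact minimizer along d is alpha = 1.
```

In mini-batch training the same sign-change criterion is applied to noisy directional derivatives, which is what makes the approach robust where function-value-based line searches fail.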

Resolving learning rates adaptively by locating Stochastic Non-Negative Associated Gradient Projection Points using line searches

no code implementations • 15 Jan 2020 • Dominic Kafka, Daniel N. Wilke

This study proposes gradient-only line searches to resolve the learning rate for neural network training algorithms.

Gradient-only line searches: An Alternative to Probabilistic Line Searches

no code implementations • 22 Mar 2019 • Dominic Kafka, Daniel Wilke

Line searches are capable of adaptively resolving learning rate schedules.

Traversing the noise of dynamic mini-batch sub-sampled loss functions: A visual guide

no code implementations • 20 Mar 2019 • Dominic Kafka, Daniel Wilke

Mini-batch sub-sampling in neural network training is unavoidable, due to growing data demands, memory-limited computational resources such as graphical processing units (GPUs), and the dynamics of on-line learning.

Stochastic Optimization
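The "dynamic" sub-sampling the paper visualizes can be reproduced in a few lines: re-drawing the mini-batch at every loss evaluation turns a smooth loss curve along a parameter into a discontinuous, noisy one. This is an illustrative sketch with synthetic data, not a figure from the paper; all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: y = 2x + noise.
X = rng.normal(size=1000)
y = 2.0 * X + 0.1 * rng.normal(size=1000)

def batch_loss(w, idx):
    """Mean squared error of the model y = w*x on the mini-batch idx."""
    r = y[idx] - w * X[idx]
    return np.mean(r ** 2)

weights = np.linspace(1.0, 3.0, 50)

# Static sampling: one fixed mini-batch -> a smooth loss curve in w.
fixed = rng.choice(len(X), size=32, replace=False)
static_curve = [batch_loss(w, fixed) for w in weights]

# Dynamic sampling: a fresh mini-batch per evaluation -> a noisy,
# point-wise discontinuous curve, which breaks function-value comparisons.
dynamic_curve = [batch_loss(w, rng.choice(len(X), size=32, replace=False))
                 for w in weights]
```

Plotting `static_curve` against `dynamic_curve` reproduces the qualitative picture the paper walks through: minima of the dynamic curve are artefacts of the sampling, which motivates gradient-only criteria for locating candidate solutions.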
