Search Results for author: Nikolas Ioannou

Found 7 papers, 2 papers with code

Towards a General Framework for ML-based Self-tuning Databases

no code implementations • 16 Nov 2020 • Thomas Schmied, Diego Didona, Andreas Döring, Thomas Parnell, Nikolas Ioannou

Machine learning (ML) methods have recently emerged as an effective way to perform automated parameter tuning of databases.

Bayesian Optimization • Reinforcement Learning (RL) • +1
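The snippet and task tags above point to Bayesian optimization as one way to tune database knobs automatically. Below is a minimal sketch of that general idea only, not the framework proposed in the paper; the knobs, the benchmark() function, and all constants are hypothetical stand-ins for a real database and workload.

```python
# Minimal sketch of ML-based knob tuning via Bayesian optimization.
# The knobs and benchmark() are hypothetical; this is NOT the paper's framework.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Two hypothetical knobs, normalized to [0, 1] (e.g. buffer-pool size, IO threads).
def benchmark(x):
    """Pretend throughput measurement (higher is better)."""
    return -((x[0] - 0.7) ** 2 + (x[1] - 0.3) ** 2) + 0.05 * rng.normal()

X = rng.uniform(size=(5, 2))               # initial random configurations
y = np.array([benchmark(x) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = rng.uniform(size=(256, 2))      # random candidate configurations
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    # Expected-improvement acquisition: favor candidates likely to beat the best so far.
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, benchmark(x_next))

print("best configuration:", X[np.argmax(y)], "throughput:", y.max())
```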

SnapBoost: A Heterogeneous Boosting Machine

2 code implementations NeurIPS 2020 Thomas Parnell, Andreea Anghel, Malgorzata Lazuka, Nikolas Ioannou, Sebastian Kurella, Peshal Agarwal, Nikolaos Papandreou, Haralampos Pozidis

At each boosting iteration, a Newton boosting machine seeks the base hypothesis, selected from some base hypothesis class, that is closest to the Newton descent direction in a Euclidean sense.
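As a hedged illustration of that objective, the sketch below runs a few Newton boosting steps for the logistic loss, fitting a regression tree to the per-example Newton direction by plain (Euclidean) least squares. SnapBoost's heterogeneous base-hypothesis class is not reproduced, and the data and hyperparameters are illustrative assumptions.

```python
# One base-hypothesis fit per iteration: regress on the Newton direction -g_i / h_i.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # labels in {0, 1}

F = np.zeros(len(y))          # current ensemble scores (log-odds)
lr = 0.1                      # learning rate

for it in range(10):
    p = 1.0 / (1.0 + np.exp(-F))             # predicted probabilities
    g = p - y                                 # gradient of the logistic loss w.r.t. F
    h = p * (1.0 - p)                         # (diagonal) Hessian
    newton_dir = -g / np.maximum(h, 1e-12)    # per-example Newton descent direction
    # Base hypothesis closest to the Newton direction in the Euclidean sense:
    # an ordinary least-squares fit of a small regression tree.
    tree = DecisionTreeRegressor(max_depth=3).fit(X, newton_dir)
    F += lr * tree.predict(X)

print("training accuracy:", np.mean((F > 0) == (y == 1)))
```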

Compiling Neural Networks for a Computational Memory Accelerator

1 code implementation • 5 Mar 2020 • Kornilios Kourtis, Martino Dazzi, Nikolas Ioannou, Tobias Grosser, Abu Sebastian, Evangelos Eleftheriou

Computational memory (CM) is a promising approach for accelerating inference on neural networks (NN) by using enhanced memories that, in addition to storing data, allow computations on them.
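To make the idea concrete, here is a toy functional model, an assumption-laden software sketch rather than the paper's compiler or hardware: each layer's weight matrix is "programmed" into its own stationary crossbar object, and activations flow between arrays while the weights never move.

```python
# Toy functional model of computational-memory inference: weights stay put in
# per-layer "crossbar" arrays that compute the matrix-vector product in place.
import numpy as np

class Crossbar:
    """Stationary weight array that computes W @ x where W is stored."""
    def __init__(self, weights):
        self.weights = weights            # programmed once, then never moved

    def matvec(self, x):
        return self.weights @ x           # idealized in-memory MVM

rng = np.random.default_rng(0)
layer_dims = [64, 128, 32, 10]            # hypothetical small MLP
crossbars = [Crossbar(0.1 * rng.normal(size=(n_out, n_in)))
             for n_in, n_out in zip(layer_dims[:-1], layer_dims[1:])]

def infer(x):
    # Activations flow between arrays; each layer's weights stay in its crossbar.
    for xb in crossbars[:-1]:
        x = np.maximum(xb.matvec(x), 0.0)   # ReLU between layers
    return crossbars[-1].matvec(x)

print(infer(rng.normal(size=layer_dims[0])).shape)   # -> (10,)
```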

SySCD: A System-Aware Parallel Coordinate Descent Algorithm

no code implementations NeurIPS 2019 Nikolas Ioannou, Celestine Mendler-Dünner, Thomas Parnell

In this paper we propose a novel parallel stochastic coordinate descent (SCD) algorithm with convergence guarantees that exhibits strong scalability.
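For context, the following is a minimal sequential stochastic coordinate descent sketch for ridge regression, the kind of baseline that SySCD parallelizes; the system-aware optimizations and convergence analysis from the paper are not reproduced, and the problem sizes are arbitrary.

```python
# Minimal stochastic coordinate descent (SCD) for ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 1000, 50, 1.0
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(d)
residual = y - X @ w                      # shared state, updated per coordinate

for epoch in range(20):
    for j in rng.permutation(d):          # stochastic coordinate order
        xj = X[:, j]
        # Closed-form minimizer of the ridge objective along coordinate j.
        delta = (xj @ residual - lam * w[j]) / (xj @ xj + lam)
        w[j] += delta
        residual -= delta * xj

print("recovery error:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
```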

Breadth-first, Depth-next Training of Random Forests

no code implementations • 15 Oct 2019 • Andreea Anghel, Nikolas Ioannou, Thomas Parnell, Nikolaos Papandreou, Celestine Mendler-Dünner, Haris Pozidis

In this paper we analyze, evaluate, and improve the performance of training Random Forest (RF) models on modern CPU architectures.
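A hedged toy sketch of a breadth-first, depth-next growth schedule follows: the tree is grown level by level while nodes are large, and small subtrees are finished depth-first. The split logic and the switch-over threshold are hypothetical placeholders, not the paper's actual heuristics.

```python
# Toy breadth-first, depth-next tree-growth schedule (placeholders throughout).
from collections import deque

MIN_SAMPLES = 4            # stop splitting below this node size
SWITCH_SAMPLES = 64        # below this size, grow the subtree depth-first

def split(n_samples):
    """Placeholder split: simply halves a node's samples."""
    left = n_samples // 2
    return left, n_samples - left

def grow(root_samples):
    order = []                         # visit order: (phase, depth, size)
    bfs = deque([(0, root_samples)])   # breadth-first frontier (FIFO)
    dfs = []                           # depth-first stack (LIFO)
    while bfs or dfs:
        if dfs:                        # depth-next phase takes priority
            phase, (depth, size) = "dfs", dfs.pop()
        else:
            phase, (depth, size) = "bfs", bfs.popleft()
        order.append((phase, depth, size))
        if size < MIN_SAMPLES:
            continue                   # leaf node
        for child in split(size):
            target = dfs if child < SWITCH_SAMPLES else bfs
            target.append((depth + 1, child))
    return order

for visit in grow(256)[:8]:
    print(visit)
```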

Parallel training of linear models without compromising convergence

no code implementations • 5 Nov 2018 • Nikolas Ioannou, Celestine Dünner, Kornilios Kourtis, Thomas Parnell

The combined set of optimizations results in a consistent speedup in convergence of up to 12x compared to the initial asynchronous parallel training algorithm, and up to 42x compared to state-of-the-art implementations (scikit-learn and h2o), on a range of multi-core CPU architectures.
