Search Results for author: Tobias Glasmachers

Found 38 papers, 5 papers with code

ProtoP-OD: Explainable Object Detection with Prototypical Parts

no code implementations29 Feb 2024 Pavlos Rath-Manakidis, Frederik Strothmann, Tobias Glasmachers, Laurenz Wiskott

Interpretation and visualization of the behavior of detection transformers tend to highlight the locations in the image that the model attends to, but they provide limited insight into the semantics that the model is focusing on.

Object Detection +1

Ruhr Hand Motion Catalog of Human Center-Out Transport Trajectories in 3D Task-Space Captured by a Redundant Measurement System

no code implementations31 Dec 2023 Tim Sziburis, Susanne Blex, Tobias Glasmachers, Ioannis Iossifidis

We introduce a systematic dataset of 3D center-out task-space trajectories of human hand transport movements in a natural setting.

Leveraging Topological Maps in Deep Reinforcement Learning for Multi-Object Navigation

no code implementations16 Oct 2023 Simon Hakenes, Tobias Glasmachers

This work addresses the challenge of navigating expansive spaces with sparse rewards through Reinforcement Learning (RL).

Reinforcement Learning (RL)

ContainerGym: A Real-World Reinforcement Learning Benchmark for Resource Allocation

1 code implementation6 Jul 2023 Abhijeet Pendyala, Justin Dettmer, Tobias Glasmachers, Asma Atamna

It is sufficiently versatile to evaluate reinforcement learning algorithms on any real-world problem that fits our resource allocation framework.

Decision Making, Reinforcement Learning

Recipe for Fast Large-scale SVM Training: Polishing, Parallelism, and more RAM!

no code implementations3 Jul 2022 Tobias Glasmachers

Support vector machines (SVMs) are a standard method in the machine learning toolbox, in particular for tabular data.

ConTraNet: A single end-to-end hybrid network for EEG-based and EMG-based human machine interfaces

no code implementations21 Jun 2022 Omair Ali, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis, Christian Klaes

Approach: In this work, we introduce a single hybrid model called ConTraNet, based on CNN and Transformer architectures, which is equally useful for EEG-HMI and EMG-HMI paradigms.

Electroencephalogram (EEG) +3

From Motion to Muscle

no code implementations27 Jan 2022 Marie D. Schmidt, Tobias Glasmachers, Ioannis Iossifidis

Voluntary human motion is the product of muscle activity that results from upstream motion planning of the motor cortical areas.

Motion Planning

Deep Transfer-Learning for patient specific model re-calibration: Application to sEMG-Classification

no code implementations30 Dec 2021 Stephan Johann Lehmler, Muhammad Saif-ur-Rehman, Tobias Glasmachers, Ioannis Iossifidis

In this study, we investigate the effectiveness of transfer learning using weight initialization for recalibration of two different pretrained deep learning models on a new subject's data, and compare their performance to subject-specific models.

Domain Adaptation, Transfer Learning

The (1+1)-ES Reliably Overcomes Saddle Points

no code implementations1 Dec 2021 Tobias Glasmachers

Our analysis is non-standard in that we do not even aim to estimate hitting times based on drift.

Anchored-STFT and GNAA: An extension of STFT in conjunction with an adversarial data augmentation technique for the decoding of neural signals

no code implementations30 Nov 2020 Omair Ali, Muhammad Saif-ur-Rehman, Susanne Dyck, Tobias Glasmachers, Ioannis Iossifidis, Christian Klaes

GNAA is not only an augmentation method but is also used to harness adversarial inputs in EEG data, which not only improves the classification accuracy but also enhances the robustness of the classifier.

Classification, Data Augmentation +2

Non-local Optimization: Imposing Structure on Optimization Problems by Relaxation

no code implementations11 Nov 2020 Nils Müller, Tobias Glasmachers

In stochastic optimization, particularly in evolutionary computation and reinforcement learning, the optimization of a function $f: \Omega \to \mathbb{R}$ is often addressed through optimizing a so-called relaxation $\theta \in \Theta \mapsto \mathbb{E}_\theta(f)$ of $f$, where $\Theta$ represents the parameters of a family of probability measures on $\Omega$.

Reinforcement Learning (RL) +1
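The smoothing effect of such a relaxation can be illustrated with a short sketch (the function names and constants below are our own illustration, not the paper's): a Gaussian relaxation turns a discontinuous step function into a quantity that varies smoothly with the distribution's mean.

```python
import numpy as np

def relaxation(f, mean, sigma, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the relaxation E_theta(f), where
    theta = (mean, sigma) parameterizes the Gaussian N(mean, sigma^2)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, sigma, size=n_samples)
    return f(x).mean()

# A discontinuous step function has no useful gradient ...
step = lambda x: (x > 0).astype(float)

# ... but its relaxation varies smoothly with the mean parameter.
vals = [relaxation(step, m, sigma=1.0) for m in (-2.0, 0.0, 2.0)]
print(vals)
```

At mean 0 the relaxation is close to 0.5, and it rises smoothly as the mean increases, which is exactly what makes gradient-based search on the relaxation possible.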

Latent Representation Prediction Networks

no code implementations20 Sep 2020 Hlynur Davíð Hlynsson, Merlin Schüler, Robin Schiewer, Tobias Glasmachers, Laurenz Wiskott

The prediction function is used as a forward model for search on a graph in a viewpoint-matching task and the representation learned to maximize predictability is found to outperform a pre-trained representation.

Navigate

Methods of the Vehicle Re-identification

no code implementations14 Sep 2020 Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers

We explain in detail how to improve the performance of this method using a trained network designed for classification.

Classification, General Classification +2

Convergence Analysis of the Hessian Estimation Evolution Strategy

no code implementations6 Sep 2020 Tobias Glasmachers, Oswin Krause

The class of algorithms called Hessian Estimation Evolution Strategies (HE-ESs) update the covariance matrix of their sampling distribution by directly estimating the curvature of the objective function.
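The underlying curvature estimate can be sketched in a few lines (a minimal illustration of the principle only, not the full HE-ES covariance update): a mirrored pair of samples around the current point yields a central-finite-difference estimate of the curvature along the sampled direction.

```python
import numpy as np

def directional_curvature(f, m, d, sigma=1e-3):
    """Estimate the curvature d^T H d of f at m along direction d
    from a mirrored sample pair (central finite difference)."""
    return (f(m + sigma * d) - 2.0 * f(m) + f(m - sigma * d)) / sigma**2

# Quadratic test function f(x) = 0.5 x^T A x, whose Hessian is A.
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
m = np.array([1.0, 1.0])

c0 = directional_curvature(f, m, np.array([1.0, 0.0]))
c1 = directional_curvature(f, m, np.array([0.0, 1.0]))
print(c0, c1)  # close to 1 and 100, the diagonal entries of A
```

Collecting such estimates over many sampled directions is what allows the covariance matrix of the sampling distribution to adapt to the objective's curvature.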

Analyzing Reinforcement Learning Benchmarks with Random Weight Guessing

1 code implementation16 Apr 2020 Declan Oller, Tobias Glasmachers, Giuseppe Cuccu

We propose a novel method for analyzing and visualizing the complexity of standard reinforcement learning (RL) benchmarks based on score distributions.

OpenAI Gym, Reinforcement Learning +1
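The idea can be sketched on a toy control task (our own stand-in environment, not one of the paper's benchmarks): guess policy weights at random, record each episode's return, and inspect the resulting score distribution.

```python
import numpy as np

def episode_return(w, T=20):
    """Roll out a tanh policy with guessed weights w in a toy 1-D task:
    drive the state toward zero, reward = negative squared distance."""
    s, ret = 1.0, 0.0
    for _ in range(T):
        a = np.tanh(w[0] * s + w[1])  # action from the random weights
        s += 0.1 * a                  # simple deterministic dynamics
        ret -= s ** 2
    return ret

rng = np.random.default_rng(0)
scores = np.array([episode_return(rng.normal(size=2)) for _ in range(500)])

# The spread of this distribution characterizes the benchmark's difficulty.
print(np.percentile(scores, [0, 50, 100]))
```

A benchmark on which random weights already achieve high percentile scores is, by this measure, easy; a heavy lower tail indicates that most of the weight space is unproductive.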

The Hessian Estimation Evolution Strategy

no code implementations30 Mar 2020 Tobias Glasmachers, Oswin Krause

We demonstrate that our approach to covariance matrix adaptation is efficient by evaluating it on the BBOB/COCO testbed.

Vehicle Shape and Color Classification Using Convolutional Neural Network

no code implementations15 May 2019 Mohamed Nafzi, Michael Brauckmann, Tobias Glasmachers

In order to facilitate and accelerate progress on this subject, we present our approach to collecting and labeling a large-scale dataset.

Classification, General Classification

Challenges of Convex Quadratic Bi-objective Benchmark Problems

no code implementations23 Oct 2018 Tobias Glasmachers

In this paper we analyze the specific challenges that can be posed by quadratic functions in the bi-objective case.

Multi-Merge Budget Maintenance for Stochastic Gradient Descent SVM Training

no code implementations26 Jun 2018 Sahar Qaadan, Tobias Glasmachers

Budgeted Stochastic Gradient Descent (BSGD) is a state-of-the-art technique for training large-scale kernelized support vector machines.
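One budget-maintenance strategy used in this setting is merging support vectors. The sketch below is a heavy simplification of our own (it merges the two closest vectors into a coefficient-weighted average and assumes positive coefficients; actual BSGD merging optimizes the merged point under the kernel):

```python
import numpy as np

def merge_closest(vectors, coefs):
    """When the budget is exceeded, merge the two closest support vectors
    into one. Illustration only: a coefficient-weighted average with
    positive coefficients stands in for the kernel-aware merge."""
    n = len(vectors)
    best, pair = np.inf, (0, 1)
    for i in range(n):                      # find the closest pair
        for j in range(i + 1, n):
            d = np.linalg.norm(vectors[i] - vectors[j])
            if d < best:
                best, pair = d, (i, j)
    i, j = pair
    merged_v = (coefs[i] * vectors[i] + coefs[j] * vectors[j]) / (coefs[i] + coefs[j])
    keep = [k for k in range(n) if k != i and k != j]
    return ([vectors[k] for k in keep] + [merged_v],
            [coefs[k] for k in keep] + [coefs[i] + coefs[j]])

vs = [np.array([0.0, 0.0]), np.array([0.2, 0.0]), np.array([5.0, 5.0])]
cs = [1.0, 3.0, 2.0]
new_vs, new_cs = merge_closest(vs, cs)
print(new_vs, new_cs)  # the two nearby vectors are fused into one
```

Merging keeps the model size at the budget while losing less information than simply discarding a support vector.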

Speeding Up Budgeted Stochastic Gradient Descent SVM Training with Precomputed Golden Section Search

no code implementations26 Jun 2018 Tobias Glasmachers, Sahar Qaadan

Limiting the model size of a kernel support vector machine to a pre-defined budget is a well-established technique that allows SVM learning and prediction to scale to large-scale data.

Dual SVM Training on a Budget

no code implementations26 Jun 2018 Sahar Qaadan, Merlin Schüler, Tobias Glasmachers

We present a dual subspace ascent algorithm for support vector machine training that respects a budget constraint limiting the number of support vectors.

Challenges in High-dimensional Reinforcement Learning with Evolution Strategies

1 code implementation4 Jun 2018 Nils Müller, Tobias Glasmachers

Our results give insights into which algorithmic mechanisms of modern ES are of value for the class of problems at hand, and they reveal principled limitations of the approach.

Reinforcement Learning (RL) +1

Drift Theory in Continuous Search Spaces: Expected Hitting Time of the (1+1)-ES with 1/5 Success Rule

no code implementations9 Feb 2018 Youhei Akimoto, Anne Auger, Tobias Glasmachers

This paper explores the use of the standard approach for proving runtime bounds in discrete domains, often referred to as drift analysis, in the context of optimization on a continuous domain.
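The algorithm under analysis is simple enough to state in a few lines. The sketch below is a generic textbook form of the (1+1)-ES with 1/5 success rule (constants are our own choice, not the paper's):

```python
import numpy as np

def one_plus_one_es(f, x, sigma=1.0, iters=2000, seed=0):
    """(1+1)-ES with the 1/5 success rule: enlarge the step size after a
    successful mutation and shrink it after a failure, with factors
    chosen so that sigma is stationary at a success rate of 1/5."""
    rng = np.random.default_rng(seed)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy <= fx:                    # success: accept the offspring
            x, fx = y, fy
            sigma *= np.exp(0.25)
        else:                           # failure: keep the parent
            sigma *= np.exp(-0.25 / 4)
    return x, fx

sphere = lambda x: float(np.sum(x ** 2))
x_best, f_best = one_plus_one_es(sphere, np.ones(5))
print(f_best)  # close to zero
```

On the sphere function the step size self-adapts and the algorithm converges log-linearly, which is the behavior the hitting-time analysis quantifies.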

Global Convergence of the (1+1) Evolution Strategy

no code implementations9 Jun 2017 Tobias Glasmachers

We establish global convergence of the (1+1) evolution strategy, i.e., convergence to a critical point independent of the initial state.

Evolutionary Algorithms

Limited-Memory Matrix Adaptation for Large Scale Black-box Optimization

2 code implementations18 May 2017 Ilya Loshchilov, Tobias Glasmachers, Hans-Georg Beyer

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is a popular method to deal with nonconvex and/or stochastic optimization problems when the gradient information is not available.

Stochastic Optimization

Limits of End-to-End Learning

no code implementations26 Apr 2017 Tobias Glasmachers

An end-to-end learning system is specifically designed so that all modules are differentiable.

Representation Learning

Anytime Bi-Objective Optimization with a Hybrid Multi-Objective CMA-ES (HMO-CMA-ES)

no code implementations9 May 2016 Ilya Loshchilov, Tobias Glasmachers

We propose a multi-objective optimization algorithm aimed at achieving good anytime performance over a wide range of problems.

Benchmarking

Fast model selection by limiting SVM training times

no code implementations10 Feb 2016 Aydin Demircioglu, Daniel Horn, Tobias Glasmachers, Bernd Bischl, Claus Weihs

Kernelized Support Vector Machines (SVMs) are among the best performing supervised learning methods.

Model Selection

Coordinate Descent with Online Adaptation of Coordinate Frequencies

no code implementations15 Jan 2014 Tobias Glasmachers, Ürün Dogan

Coordinate descent (CD) algorithms have become the method of choice for solving a number of optimization problems in machine learning.

BIG-bench Machine Learning, General Classification +1
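The flavor of the method can be sketched as follows (an illustrative adaptation scheme of our own design, not the paper's exact coordinate-frequency update): coordinates that recently made progress are sampled more often.

```python
import numpy as np

# Coordinate descent on f(x) = 0.5 x^T A x - b^T x, sampling coordinates
# in proportion to an online preference that rewards recent progress (an
# illustrative stand-in for adaptive coordinate frequencies).
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
f = lambda x: 0.5 * x @ A @ x - b @ x

x = np.zeros(3)
pref = np.ones(3)
for _ in range(300):
    i = rng.choice(3, p=pref / pref.sum())
    old = f(x)
    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]  # exact 1-D minimizer
    pref[i] = 0.9 * pref[i] + 0.1 * (1.0 + old - f(x))   # reward progress

print(np.linalg.norm(x - np.linalg.solve(A, b)))  # near zero
```

Because each update takes the exact one-dimensional minimizer, the loop converges to the solution of the linear system; the preference weights only change how often each coordinate is visited.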

The Planning-ahead SMO Algorithm

no code implementations31 Jul 2013 Tobias Glasmachers

The sequential minimal optimization (SMO) algorithm and variants thereof are the de facto standard method for solving large quadratic programs for support vector machine (SVM) training.

Natural Evolution Strategies

1 code implementation22 Jun 2011 Daan Wierstra, Tom Schaul, Tobias Glasmachers, Yi Sun, Jürgen Schmidhuber

This paper presents Natural Evolution Strategies (NES), a recent family of algorithms that constitute a more principled approach to black-box optimization than established evolutionary algorithms.

Evolutionary Algorithms
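A minimal NES-style loop for an isotropic Gaussian search distribution can be sketched as follows (a simplified illustration with rank-based utilities; the NES variants in the paper use more refined natural-gradient updates):

```python
import numpy as np

def nes_minimize(f, mu, sigma=1.0, pop=20, lr=0.1, iters=300, seed=0):
    """Simplified NES: sample a population from N(mu, sigma^2 I), convert
    fitness ranks into utilities, and follow the estimated search
    gradient for both the mean and the step size."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        eps = rng.standard_normal((pop, mu.size))        # search directions
        fitness = np.array([f(mu + sigma * e) for e in eps])
        ranks = np.argsort(np.argsort(fitness))          # rank 0 = best
        utils = (pop - 1 - ranks) / (pop - 1) - 0.5      # in [-0.5, 0.5]
        mu = mu + lr * sigma * (utils @ eps) / pop       # move the mean
        g_log_sigma = utils @ (np.sum(eps**2, axis=1) - mu.size) / pop
        sigma *= np.exp(0.5 * lr * g_log_sigma)          # adapt step size
    return mu

sphere = lambda x: float(np.sum(x ** 2))
mu_final = nes_minimize(sphere, np.ones(5))
print(sphere(mu_final))  # much smaller than the initial value of 5.0
```

Rank-based utilities make the update invariant to monotone transformations of the objective, one of the properties that distinguishes NES from naive stochastic gradient estimates.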
