Search Results for author: Maria Kalweit

Found 7 papers, 0 papers with code

CellMixer: Annotation-free Semantic Cell Segmentation of Heterogeneous Cell Populations

no code implementations • 1 Dec 2023 • Mehdi Naouar, Gabriel Kalweit, Anusha Klett, Yannick Vogt, Paula Silvestrini, Diana Laura Infante Ramirez, Roland Mertelsmann, Joschka Boedecker, Maria Kalweit

In recent years, several unsupervised cell segmentation methods have been presented that aim to remove the need for laborious pixel-level annotations when training a cell segmentation model.

Tasks: Cell Segmentation, Instance Segmentation, +2

Stable Online and Offline Reinforcement Learning for Antibody CDRH3 Design

no code implementations • 29 Nov 2023 • Yannick Vogt, Mehdi Naouar, Maria Kalweit, Christoph Cornelius Miething, Justus Duyster, Roland Mertelsmann, Gabriel Kalweit, Joschka Boedecker

The field of antibody-based therapeutics has grown significantly in recent years, with targeted antibodies emerging as a potentially effective approach to personalized therapies.

Tasks: reinforcement-learning

Multi-intention Inverse Q-learning for Interpretable Behavior Representation

no code implementations • 23 Nov 2023 • Hao Zhu, Brice De La Crompe, Gabriel Kalweit, Artur Schneider, Maria Kalweit, Ilka Diester, Joschka Boedecker

In advancing the understanding of decision-making processes, Inverse Reinforcement Learning (IRL) has proven instrumental in reconstructing an animal's multiple intentions amid complex behaviors.

Tasks: Decision Making, Q-Learning

Robust Tumor Detection from Coarse Annotations via Multi-Magnification Ensembles

no code implementations • 29 Mar 2023 • Mehdi Naouar, Gabriel Kalweit, Ignacio Mastroleo, Philipp Poxleitner, Marc Metzger, Joschka Boedecker, Maria Kalweit

In this work, we put the focus back on tumor localization in the form of a patch-level classification task and take up the setting of so-called coarse annotations, which provide greater training supervision while remaining feasible from a clinical standpoint.

Tasks: Multiple Instance Learning, whole slide images
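
The entry above frames tumor detection as patch-level classification fused across multiple magnifications. As a minimal sketch of one plausible fusion step, the snippet below averages per-patch tumor probabilities over magnification levels on a shared grid; the function name, grid alignment, and threshold are illustrative assumptions, not the paper's implementation:

    import numpy as np

    def ensemble_patch_predictions(prob_maps, threshold=0.5):
        """Fuse per-patch tumor probabilities from several magnifications.

        prob_maps: list of (H, W) arrays, one per magnification, assumed
        to be resampled to a common patch grid beforehand (assumption).
        Returns a boolean (H, W) tumor mask.
        """
        stacked = np.stack(prob_maps)        # (n_magnifications, H, W)
        mean_prob = stacked.mean(axis=0)     # simple averaging ensemble
        return mean_prob > threshold

    # Usage: three magnification levels over a 4x4 patch grid
    rng = np.random.default_rng(0)
    maps = [rng.random((4, 4)) for _ in range(3)]
    mask = ensemble_patch_predictions(maps)

Averaging is only the simplest choice here; majority voting or learned weighting across magnifications would slot into the same interface.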

NeuRL: Closed-form Inverse Reinforcement Learning for Neural Decoding

no code implementations • 10 Apr 2022 • Gabriel Kalweit, Maria Kalweit, Mansour Alyahyay, Zoe Jaeckel, Florian Steenbergen, Stefanie Hardung, Thomas Brox, Ilka Diester, Joschka Boedecker

However, since there is generally a strong connection between subjects' learning and their expectations of long-term rewards, we propose NeuRL, an inverse reinforcement learning approach that (1) extracts an intrinsic reward function from collected trajectories of a subject in closed form, (2) maps neural signals to this intrinsic reward to account for long-term dependencies in the behavior, and (3) predicts the simulated behavior for unseen neural signals by extracting Q-values and the corresponding Boltzmann policy based on the intrinsic reward values for these unseen neural signals.

Tasks: Reinforcement Learning (RL)
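
The abstract above spells out a three-step pipeline, so a small sketch of step (3) may help: given intrinsic rewards (here, one per state) predicted from neural signals, derive Q-values and the corresponding Boltzmann policy. The tabular MDP, discount factor, temperature, and all function names are illustrative assumptions; this is not the authors' code:

    import numpy as np

    def q_from_rewards(rewards, transitions, gamma=0.9, n_iters=200):
        """Q-values for a tabular MDP via value iteration.

        rewards:     (S,) intrinsic reward per state (e.g. regressed from
                     neural signals, as in step (2) above)
        transitions: (A, S, S) probabilities P(s' | s, a)
        """
        n_states, n_actions = rewards.shape[0], transitions.shape[0]
        q = np.zeros((n_states, n_actions))
        for _ in range(n_iters):
            v = q.max(axis=1)                          # greedy state values
            # Q(s, a) = r(s) + gamma * sum_{s'} P(s' | s, a) * V(s')
            q = rewards[:, None] + gamma * np.einsum("asp,p->sa", transitions, v)
        return q

    def boltzmann_policy(q, beta=1.0):
        """Softmax (Boltzmann) policy over actions with temperature 1/beta."""
        logits = beta * q - (beta * q).max(axis=1, keepdims=True)  # stability
        probs = np.exp(logits)
        return probs / probs.sum(axis=1, keepdims=True)

    # Usage on a random 5-state, 3-action MDP
    rng = np.random.default_rng(0)
    P = rng.random((3, 5, 5))
    P /= P.sum(axis=-1, keepdims=True)                 # normalize P(s' | s, a)
    pi = boltzmann_policy(q_from_rewards(rng.standard_normal(5), P), beta=2.0)

Unseen neural signals would enter through the rewards argument: map each signal to an intrinsic reward, then read the predicted behavior off the resulting Boltzmann policy.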

Robust and Data-efficient Q-learning by Composite Value-estimation

no code implementations • 29 Sep 2021 • Gabriel Kalweit, Maria Kalweit, Joschka Boedecker

In the past few years, off-policy reinforcement learning methods have shown promising results when applied to robot control.

Tasks: Q-Learning

Deep Surrogate Q-Learning for Autonomous Driving

no code implementations • 21 Oct 2020 • Maria Kalweit, Gabriel Kalweit, Moritz Werling, Joschka Boedecker

Challenging problems for applying deep reinforcement learning systems to real systems are their adaptivity to changing environments and their efficiency w.r.t. …

Tasks: Autonomous Driving, Q-Learning
