Search Results for author: Ahmed Allam

Found 12 papers, 6 papers with code

Two-Stage Aggregation with Dynamic Local Attention for Irregular Time Series

no code implementations · 13 Nov 2023 · Xingyu Chen, Xiaochen Zheng, Amina Mollaysa, Manuel Schürch, Ahmed Allam, Michael Krauthammer

Here, we introduce TADA, a Two-stage Aggregation process with Dynamic local Attention to harmonize time-wise and feature-wise irregularities in multivariate time series.

Irregular Time Series · Time Series

Attention-based Multi-task Learning for Base Editor Outcome Prediction

no code implementations · 13 Nov 2023 · Amina Mollaysa, Ahmed Allam, Michael Krauthammer

To speed up this process, we present an attention-based two-stage machine learning model that learns to predict the likelihood of all possible editing outcomes for a given genomic target sequence.

Multi-Task Learning

Attention-based Multi-task Learning for Base Editor Outcome Prediction

no code implementations · 4 Oct 2023 · Amina Mollaysa, Ahmed Allam, Michael Krauthammer

To speed up this process, we present an attention-based two-stage machine learning model that learns to predict the likelihood of all possible editing outcomes for a given genomic target sequence.

Multi-Task Learning

Generating Personalized Insulin Treatments Strategies with Deep Conditional Generative Time Series Models

no code implementations · 28 Sep 2023 · Manuel Schürch, Xiang Li, Ahmed Allam, Giulia Rathmes, Amina Mollaysa, Claudia Cavelti-Weder, Michael Krauthammer

We propose a novel framework that combines deep generative time series models with decision theory for generating personalized treatment strategies.

Time Series

SimTS: Rethinking Contrastive Representation Learning for Time Series Forecasting

1 code implementation · 31 Mar 2023 · Xiaochen Zheng, Xingyu Chen, Manuel Schürch, Amina Mollaysa, Ahmed Allam, Michael Krauthammer

Contrastive learning methods have shown an impressive ability to learn meaningful representations for image or time series classification.

Contrastive Learning · Representation Learning +3
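For context on the entry above: the excerpt only gestures at contrastive representation learning, so here is a minimal, generic sketch of an InfoNCE-style contrastive objective applied to time series windows. The encoder, the jitter augmentation, and every name below are hypothetical illustrations of the general technique, not the SimTS method or its released code.

```python
# Generic sketch of an InfoNCE-style contrastive loss for time series windows.
# Illustrates contrastive representation learning in general; NOT the SimTS
# method. Encoder architecture and augmentation are assumptions for the sketch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv1dEncoder(nn.Module):
    """Maps a (batch, channels, length) window to a unit-norm embedding."""
    def __init__(self, in_channels: int, dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.proj = nn.Linear(64, dim)

    def forward(self, x):
        h = self.net(x).squeeze(-1)              # (batch, 64)
        return F.normalize(self.proj(h), dim=-1)

def info_nce(z1, z2, temperature: float = 0.1):
    """Treat (z1[i], z2[i]) as positives and all other pairs as negatives."""
    logits = z1 @ z2.t() / temperature           # (batch, batch) similarities
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

# Toy usage: two lightly jittered "views" of the same windows.
x = torch.randn(8, 3, 128)                       # 8 windows, 3 channels, length 128
encoder = Conv1dEncoder(in_channels=3)
loss = info_nce(encoder(x + 0.01 * torch.randn_like(x)),
                encoder(x + 0.01 * torch.randn_like(x)))
loss.backward()
```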

Exploratory Analysis of Federated Learning Methods with Differential Privacy on MIMIC-III

no code implementations · 8 Feb 2023 · Aron N. Horvath, Matteo Berchier, Farhad Nooralahzadeh, Ahmed Allam, Michael Krauthammer

Methods: We present an extensive evaluation of the impact of different federation and differential privacy techniques when training models on the open-source MIMIC-III dataset.

Federated Learning

DDoS: A Graph Neural Network based Drug Synergy Prediction Algorithm

1 code implementation · 3 Oct 2022 · Kyriakos Schwarz, Alicia Pliego-Mendieta, Amina Mollaysa, Lara Planas-Paz, Chantal Pauli, Ahmed Allam, Michael Krauthammer

In contrast to conventional models relying on pre-computed chemical features, our GNN-based approach learns task-specific drug representations directly from the graph structure of the drugs, providing superior performance in predicting drug synergies.
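To make the entry above concrete, here is a minimal, hypothetical sketch of the general pattern it describes: embed each drug's molecular graph with a graph neural network and score the pair for synergy. The layer definitions, dimensions, pooling, and all names are assumptions made for illustration; this is not the DDoS implementation.

```python
# Minimal sketch: GNN embeddings of two drug graphs feeding a synergy head.
# All hyperparameters and module names are hypothetical, not the DDoS code.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One mean-aggregation message-passing layer over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Add self-loops and normalize by node degree before mixing features.
        adj = adj + torch.eye(adj.size(0))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin(adj @ x / deg))

class DrugEncoder(nn.Module):
    """Stacks graph convolutions and mean-pools atoms into one drug embedding."""
    def __init__(self, atom_dim, hidden=64):
        super().__init__()
        self.gc1 = SimpleGraphConv(atom_dim, hidden)
        self.gc2 = SimpleGraphConv(hidden, hidden)

    def forward(self, atom_feats, adj):
        h = self.gc2(self.gc1(atom_feats, adj), adj)
        return h.mean(dim=0)                      # graph-level embedding

class SynergyScorer(nn.Module):
    """Concatenates two drug embeddings and predicts a synergy logit."""
    def __init__(self, hidden=64):
        super().__init__()
        self.encoder = DrugEncoder(atom_dim=16, hidden=hidden)
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, drug_a, drug_b):
        za = self.encoder(*drug_a)
        zb = self.encoder(*drug_b)
        return self.head(torch.cat([za, zb]))

# Toy usage with random atom features and adjacency matrices.
drug_a = (torch.randn(10, 16), torch.randint(0, 2, (10, 10)).float())
drug_b = (torch.randn(7, 16), torch.randint(0, 2, (7, 7)).float())
print(SynergyScorer()(drug_a, drug_b))
```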

AttentionDDI: Siamese Attention-based Deep Learning method for drug-drug interaction predictions

1 code implementation · 24 Dec 2020 · Kyriakos Schwarz, Ahmed Allam, Nicolas Andres Perez Gonzalez, Michael Krauthammer

Background: Drug-drug interactions (DDIs) refer to processes triggered by the administration of two or more drugs leading to side effects beyond those observed when drugs are administered by themselves.

Patient Similarity Analysis with Longitudinal Health Data

no code implementations · 14 May 2020 · Ahmed Allam, Matthias Dittberner, Anna Sintsova, Dominique Brodbeck, Michael Krauthammer

Healthcare professionals have long envisioned using the enormous processing power of computers to discover new facts and medical knowledge locked inside electronic health records.

Decision Making

AutoDiscern: Rating the Quality of Online Health Information with Hierarchical Encoder Attention-based Neural Networks

1 code implementation · 30 Dec 2019 · Laura Kinkead, Ahmed Allam, Michael Krauthammer

Patients increasingly turn to search engines and online content before, or in place of, talking with a health professional.

Misinformation

Neural networks versus Logistic regression for 30 days all-cause readmission prediction

1 code implementation · 22 Dec 2018 · Ahmed Allam, Mate Nagy, George Thoma, Michael Krauthammer

Among the deep learning approaches, a recurrent neural network (RNN) combined with conditional random fields (CRF) model (RNNCRF) achieved the best performance in readmission prediction with 0.642 AUC (95% CI, 0.640-0.645).

Management · Readmission Prediction +1
