1 code implementation • 25 Jan 2024 • Talip Ucar, Aubin Ramon, Dino Oglic, Rebecca Croasdale-Wood, Tom Diethe, Pietro Sormanni
We investigate the potential of patent data for improving antibody humanness prediction using a multi-stage, multi-loss training process.
1 code implementation • 15 Mar 2023 • Talip Ucar
We present a framework for learning Node Embeddings from Static Subgraphs (NESS) using a graph autoencoder (GAE) in a transductive setting.
Ranked #1 on Link Prediction on Cora
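The snippet names a graph autoencoder (GAE) used transductively; the details of NESS itself (its use of static subgraphs) are not given here. A minimal sketch of a standard GAE, a one-layer GCN encoder with an inner-product decoder for link prediction, under those assumptions:

```python
import numpy as np

def normalize_adj(A):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return d_inv_sqrt @ A_hat @ d_inv_sqrt

def gae_embed(A, X, W):
    # One-layer GCN encoder: Z = ReLU(A_hat X W)
    return np.maximum(normalize_adj(A) @ X @ W, 0.0)

def decode(Z):
    # Inner-product decoder: predicted edge probabilities sigmoid(Z Z^T)
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))

# Toy 4-node path graph with identity node features
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))        # random (untrained) encoder weights
P = decode(gae_embed(A, X, W))     # 4x4 matrix of link probabilities
```

In a transductive setting the whole graph (minus held-out edges) is visible at embedding time, and `P` is scored against the held-out edges for link prediction.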
no code implementations • 5 Aug 2022 • Talip Ucar, Ehsan Hajiramezanali
We conduct extensive experiments on a variety of synthetic and real-world data, demonstrating that XTab can be used to obtain global feature importance estimates that are not sensitive to sub-optimal model initialisation.
2 code implementations • NeurIPS 2021 • Talip Ucar, Ehsan Hajiramezanali, Lindsay Edwards
Self-supervised learning has been shown to be very effective in learning useful representations, and yet much of the success is achieved in data types such as images, audio, and text.
Ranked #4 on Unsupervised MNIST on MNIST
no code implementations • 29 Sep 2021 • Ehsan Hajiramezanali, Talip Ucar, Lindsay Edwards
First, they are not stochastic processes, leading to poor uncertainty estimates for their predictions.
1 code implementation • 19 Jul 2020 • Talip Ucar, Adrian Gonzalez-Martin, Matthew Lee, Adrian Daniel Szwarc
Humans can infer a great deal about the meaning of a word, using the syntax and semantics of surrounding words even if it is their first time reading or hearing it.
no code implementations • 31 Oct 2019 • Talip Ucar
Our approach is to use a generative model that produces 2-D images as projections of a latent 3D voxel grid, which we train either as a variational auto-encoder or using adversarial methods.
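The snippet describes rendering 2-D images as projections of a latent 3-D voxel grid. The exact projection used is not specified here; a minimal sketch of one common choice, an orthographic projection that sums occupancy along a viewing axis and squashes the accumulated depth into pixel intensities, is:

```python
import numpy as np

def project_voxels(voxels, axis=0):
    # Orthographic projection of a 3-D occupancy grid to a 2-D image:
    # accumulate occupancy along one axis, then map depth to [0, 1)
    # so the output behaves like pixel intensities.
    depth = voxels.sum(axis=axis)
    return 1.0 - np.exp(-depth)

# Toy 8x8x8 latent voxel grid with a filled cube in the middle
v = np.zeros((8, 8, 8))
v[2:6, 2:6, 2:6] = 1.0
img = project_voxels(v, axis=0)   # 8x8 "rendered" view of the cube
```

In the paper's setup the voxel grid is the latent variable, so the projection sits between the latent space and the image-space reconstruction loss, whether trained as a variational auto-encoder or adversarially.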
no code implementations • 29 Oct 2019 • Talip Ucar
The $\mu$-VAE is less prone to posterior collapse, and can generate reconstructions and new samples of good quality.
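Posterior collapse is the failure mode where the KL term of the ELBO drives the approximate posterior to match the prior, so the latents carry no information about the input. A minimal way to check for it, assuming the standard diagonal-Gaussian posterior (the $\mu$-VAE's specific remedy is not detailed in this snippet):

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims.
    # Near-zero KL for (almost) all inputs signals posterior collapse.
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)

# A collapsed posterior equals the prior: KL is exactly 0
collapsed = kl_diag_gaussian(np.zeros((1, 8)), np.zeros((1, 8)))

# An informative posterior deviates from the prior: KL > 0
informative = kl_diag_gaussian(np.full((1, 8), 2.0), np.full((1, 8), -1.0))
```

Monitoring this per-dimension KL during training is a standard diagnostic: dimensions whose KL stays near zero have collapsed to the prior.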