Search Results for author: Tristan Bepler

Found 9 papers, 7 papers with code

Unsupervised Object Representation Learning using Translation and Rotation Group Equivariant VAE

1 code implementation · 24 Oct 2022 · Alireza Nasiri, Tristan Bepler

Here, we consider the problem of learning semantic representations of objects that are invariant to pose and location in a fully unsupervised manner.

Tasks: Learning Semantic Representations, Object, +3

Few Shot Protein Generation

no code implementations · 3 Apr 2022 · Soumya Ram, Tristan Bepler

We present the MSA-to-protein transformer, a generative model of protein sequences conditioned on protein families represented by multiple sequence alignments (MSAs).

Tasks: Multiple Sequence Alignment

Learning to automate cryo-electron microscopy data collection with Ptolemy

1 code implementation · 1 Dec 2021 · Paul T. Kim, Alex J. Noble, Anchi Cheng, Tristan Bepler

Automating cryo-EM data collection is non-trivial: the images suffer from low signal-to-noise ratio and are affected by a range of experimental parameters that can differ for each collection session.

Tasks: Cryogenic Electron Microscopy (cryo-EM), Navigate

Explicitly disentangling image content from translation and rotation with spatial-VAE

1 code implementation · NeurIPS 2019 · Tristan Bepler, Ellen D. Zhong, Kotaro Kelley, Edward Brignole, Bonnie Berger

Given an image dataset, we are often interested in finding data generative factors that encode semantic content independently from pose variables such as rotation and translation.

Tasks: Disentanglement, Translation
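The core idea of spatial-VAE can be illustrated with a short sketch (not the authors' code): the decoder renders each pixel from a spatial coordinate, and pose is factored out by rotating and translating the coordinate grid before decoding, so the latent content vector need not encode rotation or translation. The function and grid below are illustrative assumptions.

```python
import numpy as np

def transform_coords(coords, theta, dx, dy):
    """Rotate 2D pixel coordinates by angle theta (radians) and
    translate by (dx, dy). In a spatial-VAE-style model, the decoder
    evaluates image content at these transformed coordinates, which
    disentangles pose from the content latent."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return coords @ R.T + np.array([dx, dy])

# A small coordinate grid over [-1, 1]^2, one row per pixel.
xs = np.linspace(-1.0, 1.0, 2)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

# Rotate the grid by 90 degrees; a decoder would now be queried
# at these coordinates to produce the rotated image.
rotated = transform_coords(grid, np.pi / 2, 0.0, 0.0)
```

Because the transform acts on coordinates rather than pixels, the same content latent reproduces the object at any pose.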

Reconstructing continuous distributions of 3D protein structure from cryo-EM images

2 code implementations · ICLR 2020 · Ellen D. Zhong, Tristan Bepler, Joseph H. Davis, Bonnie Berger

Cryo-electron microscopy (cryo-EM) is a powerful technique for determining the structure of proteins and other macromolecular complexes at near-atomic resolution.

Tasks: 3D Volumetric Reconstruction, Clustering, +2

Learning protein sequence embeddings using information from structure

1 code implementation · ICLR 2019 · Tristan Bepler, Bonnie Berger

We introduce a framework that maps any protein sequence to a sequence of vector embeddings, one per amino acid position, that encode structural information.

Tasks: Position, Representation Learning
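To illustrate the shape of the output described above (a per-position embedding of a protein sequence), here is a minimal sketch. The random lookup table is a hypothetical stand-in for the paper's learned, structure-informed encoder; the alphabet and dimension are illustrative assumptions.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues
EMBED_DIM = 8  # illustrative; the learned embeddings have their own size

rng = np.random.default_rng(0)
# Hypothetical embedding table; the real model learns these vectors
# using supervision from protein structure.
table = rng.normal(size=(len(AMINO_ACIDS), EMBED_DIM))

def embed(sequence):
    """Map a protein sequence to one embedding vector per position."""
    idx = [AMINO_ACIDS.index(aa) for aa in sequence]
    return table[idx]

vecs = embed("MKTAY")  # 5 residues -> array of shape (5, EMBED_DIM)
```

The key property is that the output length tracks the sequence length, so downstream models can reason about individual residues.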
