no code implementations • ICLR Workshop EBM 2021 • Nick Bhattacharya, Neil Thomas, Roshan Rao, Justas Dauparas, Peter K Koo, David Baker, Yun S. Song, Sergey Ovchinnikov
On the one hand, factored attention is a direct simplification of multihead scaled dot-product attention in the Transformer.
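To illustrate the relationship, here is a minimal NumPy sketch contrasting standard single-head scaled dot-product attention with a factored variant in which the attention weights depend only on learned per-position parameters rather than on the input sequence. The function names, shapes, and the exact factorization shown are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def softmax_rows(scores):
    # Numerically stable row-wise softmax.
    w = np.exp(scores - scores.max(-1, keepdims=True))
    return w / w.sum(-1, keepdims=True)

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    # Standard attention: queries, keys, and values are all projections
    # of the input X, so attention weights depend on sequence content.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax_rows(Q @ K.T / np.sqrt(Q.shape[-1]))
    return weights @ V

def factored_attention(X, Qp, Kp, Wv):
    # Factored variant (illustrative): the attention logits come from
    # learned position parameters Qp, Kp alone, independent of X; only
    # the value projection still touches the input.
    weights = softmax_rows(Qp @ Kp.T / np.sqrt(Qp.shape[-1]))
    return weights @ (X @ Wv)

rng = np.random.default_rng(0)
L, d = 8, 16                      # sequence length, model dimension
X = rng.standard_normal((L, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Qp, Kp = rng.standard_normal((L, d)), rng.standard_normal((L, d))

out_std = scaled_dot_product_attention(X, Wq, Wk, Wv)
out_fac = factored_attention(X, Qp, Kp, Wv)
print(out_std.shape, out_fac.shape)  # both (8, 16)
```

Both variants return one output vector per position; the factored form simply drops the input-dependence of the attention map, which is what makes it a direct simplification of the Transformer's attention.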
5 code implementations • NeurIPS 2019 • Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, Yun S. Song
Semi-supervised learning has emerged as an important paradigm in protein modeling due to the high cost of acquiring supervised protein labels, but the current literature is fragmented when it comes to datasets and standardized evaluation techniques.