Search Results for author: Kian Kenyon-Dean

Found 10 papers, 5 papers with code

Masked Autoencoders are Scalable Learners of Cellular Morphology

1 code implementation • 27 Sep 2023 • Oren Kraus, Kian Kenyon-Dean, Saber Saberian, Maryam Fallah, Peter McLean, Jess Leung, Vasudev Sharma, Ayla Khan, Jia Balakrishnan, Safiye Celik, Maciej Sypetkowski, Chi Vicky Cheng, Kristen Morse, Maureen Makes, Ben Mabey, Berton Earnshaw

Inferring biological relationships from cellular phenotypes in high-content microscopy screens provides significant opportunities and challenges in biological research.
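The technique named in the title, masked autoencoding, hides most of each image and trains a model to reconstruct the hidden parts. Below is a minimal sketch of the masking step only; the 16-pixel patch size and 75% mask ratio are common MAE defaults assumed here for illustration, not values taken from the paper:

```python
# Minimal sketch of MAE-style patch masking (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))   # stand-in for a microscopy crop
patch = 16
mask_ratio = 0.75

# Flatten the image into (num_patches, patch*patch*3) tokens.
h = w = 224 // patch
patches = image.reshape(h, patch, w, patch, 3).swapaxes(1, 2).reshape(h * w, -1)

# Keep a random 25% of patches; the decoder must reconstruct the rest.
num_keep = int(len(patches) * (1 - mask_ratio))
keep_idx = rng.permutation(len(patches))[:num_keep]
visible = patches[keep_idx]
print(patches.shape, visible.shape)          # (196, 768) (49, 768)
```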

Deconstructing word embedding algorithms

no code implementations • EMNLP 2020 • Kian Kenyon-Dean, Edward Newell, Jackie Chi Kit Cheung

Word embeddings are reliable feature representations of words used to obtain high quality results for various NLP applications.

Word Embeddings

Learning Efficient Task-Specific Meta-Embeddings with Word Prisms

1 code implementation • COLING 2020 • Jingyi He, KC Tsiolis, Kian Kenyon-Dean, Jackie Chi Kit Cheung

Word embeddings are trained to predict word co-occurrence statistics, which leads them to possess different lexical properties (syntactic, semantic, etc.).

Word Embeddings
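For context on the meta-embedding recipe the title refers to, here is a simplified sketch: each source embedding is passed through a learned, task-trainable projection and the results are concatenated. The paper's word prisms are more constrained than this (for instance, their projections are orthogonal); the MetaEmbedding module and the dimensions below are illustrative assumptions:

```python
# Simplified task-specific meta-embedding: learned projection + concatenation.
# This only illustrates the general recipe, not the paper's exact method.
import torch
import torch.nn as nn

class MetaEmbedding(nn.Module):
    def __init__(self, source_dims, proj_dim):
        super().__init__()
        # One trainable projection per source embedding space.
        self.projections = nn.ModuleList(
            nn.Linear(d, proj_dim, bias=False) for d in source_dims)

    def forward(self, source_vectors):
        # source_vectors: list of (batch, d_i) tensors, one per source.
        return torch.cat([proj(v) for proj, v in
                          zip(self.projections, source_vectors)], dim=-1)

# Example: combine hypothetical 300-d and 100-d source embeddings.
layer = MetaEmbedding([300, 100], proj_dim=64)
out = layer([torch.randn(4, 300), torch.randn(4, 100)])
print(out.shape)   # torch.Size([4, 128])
```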

Deconstructing and reconstructing word embedding algorithms

no code implementations • 29 Nov 2019 • Edward Newell, Kian Kenyon-Dean, Jackie Chi Kit Cheung

Uncontextualized word embeddings are reliable feature representations of words used to obtain high quality results for various NLP applications.

Word Embeddings

Word Embedding Algorithms as Generalized Low Rank Models and their Canonical Form

no code implementations • 6 Nov 2019 • Kian Kenyon-Dean

We derive that both of these algorithms attempt to produce embedding inner products that approximate pointwise mutual information (PMI) statistics in the corpus.

News Classification • POS +2
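Since the entry above centers on pointwise mutual information, a short worked example may help: PMI(w, c) = log[ P(w, c) / (P(w) P(c)) ], computed here from a toy co-occurrence matrix whose counts are made up purely for illustration:

```python
# Computes the PMI statistics that, per the paper above, word embedding
# inner products <w, c> are trained to approximate.
import numpy as np

def pmi_matrix(cooc):
    """PMI(w, c) = log[ P(w, c) / (P(w) P(c)) ] from co-occurrence counts."""
    total = cooc.sum()
    p_wc = cooc / total                      # joint probability P(w, c)
    p_w = p_wc.sum(axis=1, keepdims=True)    # marginal P(w)
    p_c = p_wc.sum(axis=0, keepdims=True)    # marginal P(c)
    with np.errstate(divide="ignore"):
        return np.log(p_wc / (p_w * p_c))    # -inf where counts are zero

# Toy co-occurrence counts for words ["cat", "dog", "car"].
cooc = np.array([[10.0,  8.0, 1.0],
                 [ 8.0, 12.0, 1.0],
                 [ 1.0,  1.0, 9.0]])
print(pmi_matrix(cooc))
```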

Clustering-Oriented Representation Learning with Attractive-Repulsive Loss

1 code implementation • 18 Dec 2018 • Kian Kenyon-Dean, Andre Cianflone, Lucas Page-Caccia, Guillaume Rabusseau, Jackie Chi Kit Cheung, Doina Precup

The standard loss function used to train neural network classifiers, categorical cross-entropy (CCE), seeks to maximize accuracy on the training data; building useful representations is not a necessary byproduct of this objective.

Clustering • General Classification +1
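To make the attractive-repulsive idea concrete, here is a hypothetical sketch: pull each embedding toward its own class centroid (attraction) and push it away from other classes' centroids (repulsion), with a margin. This illustrates the general principle only; the losses actually proposed in the paper differ in detail (see its code link above):

```python
# Hypothetical attractive-repulsive objective (illustration, not the paper's loss).
import torch

def attractive_repulsive_loss(embeddings, labels, num_classes, margin=1.0):
    # Batch centroids per class (assumes every class appears in the batch).
    centroids = torch.stack([embeddings[labels == k].mean(dim=0)
                             for k in range(num_classes)])
    dists = torch.cdist(embeddings, centroids)    # (batch, num_classes)
    idx = torch.arange(len(labels))
    attract = dists[idx, labels]                  # distance to own centroid
    mask = torch.ones_like(dists, dtype=torch.bool)
    mask[idx, labels] = False
    # Hinge-style repulsion: penalize other centroids closer than the margin.
    repulse = torch.clamp(margin - dists[mask], min=0)
    return attract.mean() + repulse.mean()

x = torch.randn(8, 16)                            # toy batch of embeddings
y = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(attractive_repulsive_loss(x, y, num_classes=2))
```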

Resolving Event Coreference with Supervised Representation Learning and Clustering-Oriented Regularization

1 code implementation • SEMEVAL 2018 • Kian Kenyon-Dean, Jackie Chi Kit Cheung, Doina Precup

This work provides insight and motivating results for a new general approach to solving coreference and clustering problems with representation learning.

Clustering • Coreference Resolution +2
