Search Results for author: Charles Jin

Found 8 papers, 4 papers with code

Evidence of Meaning in Language Models Trained on Programs

no code implementations • 18 May 2023 • Charles Jin, Martin Rinard

We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs.

Inductive Bias • Language Modelling • +1

Neural Architecture Search using Property Guided Synthesis

1 code implementation • 8 May 2022 • Charles Jin, Phitchaya Mangpo Phothilimthana, Sudip Roy

To enable this approach, we also propose a novel, efficient synthesis procedure that accepts a set of promising program properties and returns a satisfying neural architecture.

Neural Architecture Search
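The synthesis step described above can be pictured as a search over candidate architectures filtered by property predicates. This is a hypothetical sketch for illustration only; the candidate space, property names, and `synthesize` function are invented here and are not the paper's procedure:

```python
# Toy candidate space of architectures (invented for illustration).
candidates = [
    {"depth": d, "width": w}
    for d in (2, 4, 8) for w in (64, 128, 256)
]

def synthesize(properties, candidates):
    """Return the first candidate satisfying every property predicate."""
    for arch in candidates:
        if all(p(arch) for p in properties):
            return arch
    return None

# Properties are predicates over an architecture description.
arch = synthesize(
    [lambda a: a["depth"] >= 4, lambda a: a["width"] <= 128],
    candidates,
)
# arch -> {"depth": 4, "width": 64}
```

The real procedure operates on program properties of the architecture rather than simple dictionary fields; this sketch only shows the accept-properties, return-satisfying-candidate shape.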

Defending Against Backdoor Attacks Using Ensembles of Weak Learners

no code implementations • 29 Sep 2021 • Charles Jin, Melinda Sun, Martin Rinard

We propose an iterative training procedure for removing poisoned data from the training set.

Backdoor Attack • Data Poisoning
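The idea of iteratively filtering poisoned points out of a training set can be sketched with a toy one-dimensional task. Everything here (the data, the threshold classifier, the drop-misclassified rule) is an invented illustration of the general pattern, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two clean clusters with consistent labels, plus a few
# "poisoned" points whose labels are flipped.
X = np.concatenate([rng.normal(-2, 0.5, 50), rng.normal(2, 0.5, 50)])
y = np.concatenate([np.zeros(50), np.ones(50)])
poison = rng.choice(100, 10, replace=False)
y[poison] = 1 - y[poison]

keep = np.ones(100, dtype=bool)
for _ in range(5):
    # Fit a simple threshold classifier on the currently kept data,
    # then drop the points it misclassifies.
    mu0 = X[keep & (y == 0)].mean()
    mu1 = X[keep & (y == 1)].mean()
    thresh = (mu0 + mu1) / 2
    pred = (X > thresh).astype(float)
    keep[keep & (pred != y)] = False
```

On this toy data the label-flipped points end up on the wrong side of the learned threshold and are removed, while nearly all clean points survive.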

Efficient Regularization for Adversarially Robust Deep ReLU Networks

no code implementations • 29 Sep 2021 • Charles Jin, Martin Rinard

Crucially, our models are simultaneously robust against multiple state-of-the-art adversaries, suggesting that the robustness generalizes well to *unseen* adversaries.

Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks

1 code implementation • 8 May 2021 • Charles Jin, Melinda Sun, Martin Rinard

We propose a novel clustering mechanism based on an incompatibility property between subsets of data that emerges during model training.

Clustering • Data Poisoning • +1
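One way to make the notion of incompatibility between data subsets concrete: a simple model fit to one subset performs poorly on the other. This toy sketch uses an invented definition and a one-dimensional threshold model purely for illustration; the paper's incompatibility property emerges during full model training:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_threshold(X, y):
    # Midpoint between class means of a 1-D dataset.
    return (X[y == 0].mean() + X[y == 1].mean()) / 2

def accuracy(thresh, X, y):
    return ((X > thresh).astype(float) == y).mean()

# Subset A: consistently labeled clusters.
XA = np.concatenate([rng.normal(-1, 0.3, 30), rng.normal(1, 0.3, 30)])
yA = np.concatenate([np.zeros(30), np.ones(30)])
# Subset B: the same inputs with flipped labels.
XB, yB = XA.copy(), 1 - yA

tA = fit_threshold(XA, yA)
# The subsets are "incompatible" here: a model fit to A fails on B.
incompatible = accuracy(tA, XB, yB) < 0.5
```

A clustering built on such pairwise incompatibility can separate data that demands contradictory decision rules, which is the intuition behind using it against poisoning.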

Context-Agnostic Learning Using Synthetic Data

no code implementations • 1 Jan 2021 • Charles Jin, Martin Rinard

We propose a novel setting for learning, where the input domain is the image of a map defined on the product of two sets, one of which completely determines the labels.

Classification • Few-Shot Learning • +2
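The setting described above can be made concrete with a toy construction: each input is the image of a pair (context, signal) under a map, and the label is fully determined by the signal factor alone. The names `g`, `contexts`, and `signals` below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(c, s):
    # Toy map from a (context, signal) pair into the input domain.
    return np.concatenate([c, s])

signals = rng.standard_normal((100, 2))    # this factor determines labels
contexts = rng.standard_normal((100, 3))   # this factor is label-irrelevant
labels = (signals[:, 0] > 0).astype(int)
inputs = np.array([g(c, s) for c, s in zip(contexts, signals)])
```

Because the label depends only on the signal factor, a context-agnostic learner should perform equally well under any redraw of `contexts`, which is what the synthetic-data setup makes testable.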

Towards Context-Agnostic Learning Using Synthetic Data

1 code implementation • NeurIPS 2021 • Charles Jin, Martin Rinard

We propose a novel setting for learning, where the input domain is the image of a map defined on the product of two sets, one of which completely determines the labels.

Few-Shot Learning • Image Classification • +1

Manifold Regularization for Locally Stable Deep Neural Networks

1 code implementation • 9 Mar 2020 • Charles Jin, Martin Rinard

We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks.
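Local stability of the kind targeted above is often encouraged by penalizing how much a network's output changes under small input perturbations. This is a generic stand-in sketch of such a penalty, assuming a tiny hand-rolled ReLU network; it is not the regularizer from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def net(w, x):
    # Tiny one-layer ReLU network: weight matrix w, batch of inputs x.
    return np.maximum(x @ w, 0.0).sum(axis=1)

def stability_penalty(w, x, eps=0.1, n=8):
    # Mean squared output change under small random input
    # perturbations -- a simple local-stability surrogate.
    base = net(w, x)
    diffs = [net(w, x + eps * rng.standard_normal(x.shape)) - base
             for _ in range(n)]
    return float(np.mean(np.square(diffs)))

w = rng.standard_normal((4, 3))
x = rng.standard_normal((16, 4))
penalty = stability_penalty(w, x)
```

In training, such a term would be added to the task loss with a weighting coefficient, trading accuracy on the data against insensitivity to nearby perturbations.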
