no code implementations • 18 May 2023 • Charles Jin, Martin Rinard
We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs.
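The only training signal here is next-token prediction. As a minimal sketch of that objective (a PyTorch-style setup, with a single linear layer standing in for the actual language model):

```python
# Minimal sketch of next-token prediction, the sole training signal the
# paper assumes. The tiny model below is illustrative, not the paper's.
import torch
import torch.nn as nn

vocab_size, d_model = 256, 64
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),  # stand-in for a transformer stack
)

tokens = torch.randint(0, vocab_size, (8, 32))  # a batch of program tokens
logits = model(tokens[:, :-1])                  # predict each next token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)
loss.backward()
```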
1 code implementation • 8 May 2022 • Charles Jin, Phitchaya Mangpo Phothilimthana, Sudip Roy
To enable this approach, we also propose a novel, efficient synthesis procedure that accepts a set of promising program properties and returns a satisfying neural architecture.
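The abstract fixes only the procedure's contract: properties in, satisfying architecture out. A hypothetical sketch of that interface, where `Arch`, `Property`, and the naive enumerate-and-check body are all illustrative stand-ins rather than the paper's actual (efficient) algorithm:

```python
# Hypothetical interface sketch only; none of these names come from the paper.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

@dataclass(frozen=True)
class Arch:
    layers: tuple  # e.g. ("conv3x3", "relu", "fc")

@dataclass(frozen=True)
class Property:
    name: str
    check: Callable[[Arch], bool]  # predicate over a candidate architecture

def synthesize(props: Iterable[Property],
               candidates: Iterable[Arch]) -> Optional[Arch]:
    """Return the first candidate that satisfies every property, else None."""
    props = list(props)
    for arch in candidates:
        if all(p.check(arch) for p in props):
            return arch
    return None
```

For example, `synthesize([Property("has_conv", lambda a: "conv3x3" in a.layers)], [Arch(("fc",)), Arch(("conv3x3", "relu"))])` returns the second candidate.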
no code implementations • 29 Sep 2021 • Charles Jin, Melinda Sun, Martin Rinard
We propose an iterative training procedure for removing poisoned data from the training set.
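As a hedged sketch of what one generic loop of this shape looks like (train, score each example, drop the most suspicious fraction, retrain), where `train_fn`, `score_fn`, and the fixed drop rule are placeholders rather than the paper's criterion:

```python
# Generic iterative-filtering loop; the paper's actual removal criterion differs.
import numpy as np

def iterative_filter(X, y, train_fn, score_fn, drop_frac=0.05, rounds=5):
    """Repeatedly train, score each example, and drop the most suspicious
    fraction. `train_fn` fits a model on (X, y); `score_fn` returns a
    per-example suspicion score (higher = more likely poisoned)."""
    keep = np.arange(len(X))
    for _ in range(rounds):
        model = train_fn(X[keep], y[keep])
        scores = score_fn(model, X[keep], y[keep])
        n_drop = int(drop_frac * len(keep))
        if n_drop == 0:
            break
        keep = keep[np.argsort(scores)[:len(keep) - n_drop]]  # keep lowest-scoring
    return keep  # indices of the retained (presumed clean) examples
```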
no code implementations • 29 Sep 2021 • Charles Jin, Martin Rinard
Crucially, our models are simultaneously robust against multiple state-of-the-art adversaries, suggesting that the robustness generalizes well to unseen adversaries.
1 code implementation • 8 May 2021 • Charles Jin, Melinda Sun, Martin Rinard
We propose a novel clustering mechanism based on an incompatibility property between subsets of data that emerges during model training.
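A toy sketch of one way to read the incompatibility property (hypothetical; the paper's actual test may differ): two subsets are incompatible when a model fit on one fails to transfer to the other.

```python
# Hypothetical incompatibility test between two data subsets; `train_fn`
# and `loss_fn` are user-supplied placeholders, not the paper's API.
def incompatible(subset_a, subset_b, train_fn, loss_fn, tol=0.0):
    """subset_a and subset_b are (X, y) pairs. Fit a model on A, then
    report True if its loss on B stays well above its loss on A."""
    model = train_fn(*subset_a)
    return loss_fn(model, *subset_b) > loss_fn(model, *subset_a) + tol
```

Clustering would then group examples into mutually compatible subsets, with poisoned data separating out as a subset incompatible with the clean one.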
1 code implementation • NeurIPS 2021 • Charles Jin, Martin Rinard
We propose a novel setting for learning, where the input domain is the image of a map defined on the product of two sets, one of which completely determines the labels.
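Concretely, a toy instance of this setting (everything below is illustrative): inputs are the image of a map f on the product A × B, and the labels depend only on the A-component.

```python
# Toy instance of the setting: x = f(a, b), with labels a function of `a` alone.
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(0, 10, size=1000)   # label-determining factor (set A)
b = rng.normal(size=1000)            # nuisance factor (set B)

def f(a, b):
    """The map whose image is the input domain; here, a simple mixing."""
    return np.stack([a + b, a - b], axis=-1)

X = f(a, b)   # observed inputs
y = a % 2     # labels are completely determined by `a`
```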
1 code implementation • 9 Mar 2020 • Charles Jin, Martin Rinard
We apply concepts from manifold regularization to develop new regularization techniques for training locally stable deep neural networks.
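A minimal sketch of a local-stability penalty in the spirit of manifold regularization, assuming a PyTorch model; the paper's exact regularizer may differ:

```python
# Penalize output change under small random input perturbations; added to
# the task loss during training. A sketch, not the paper's exact term.
import torch

def stability_penalty(model, x, eps=0.1):
    """Mean squared change ||f(x) - f(x + delta)||^2 for a random
    perturbation delta with ||delta||_inf <= eps."""
    delta = eps * (2 * torch.rand_like(x) - 1)   # uniform in [-eps, eps]
    return ((model(x) - model(x + delta)) ** 2).mean()
```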