no code implementations • 21 Nov 2022 • Dimitris Achlioptas, Amrit Daswaney, Periklis A. Papakonstantinou
As a first indication of what can be achieved with the new generator, we present a novel classifier that performs significantly better than random guessing (99%) on the same datasets, for most difficulty levels.
1 code implementation • NeurIPS 2020 • Shengchao Liu, Dimitris Papailiopoulos, Dimitris Achlioptas
Several works have aimed to explain why overparameterized neural networks generalize well when trained by Stochastic Gradient Descent (SGD).
no code implementations • ACL 2017 • Alex Gittens, Dimitris Achlioptas, Michael W. Mahoney
An unexpected "side-effect" of such models is that their vectors often exhibit compositionality, i.e., adding two word-vectors results in a vector that is only a small angle away from the vector of a word representing the semantic composite of the original words, e.g., "man" + "royal" = "king".
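The compositionality claim can be illustrated with a toy sketch: add two word-vectors and find the vocabulary word whose vector makes the smallest angle (highest cosine similarity) with the sum. The 3-d vectors below are hypothetical values chosen for illustration; real embeddings would come from a trained model such as word2vec or GloVe.

```python
import math

# Hypothetical toy "word vectors" (not from a trained model).
vectors = {
    "man":   (1.0, 0.0, 0.0),
    "royal": (0.0, 1.0, 0.0),
    "king":  (1.0, 1.0, 0.0),
    "queen": (0.2, 1.0, 0.8),
    "apple": (0.0, 0.0, 1.0),
}

def cosine(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest(v, exclude=()):
    """Vocabulary word whose vector makes the smallest angle with v."""
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], v))

# "man" + "royal" lands closest to "king" in this toy vocabulary.
composite = tuple(m + r for m, r in zip(vectors["man"], vectors["royal"]))
print(nearest(composite, exclude={"man", "royal"}))  # → king
```

With a real embedding matrix the same nearest-neighbor search is done over the whole vocabulary, usually after normalizing each vector to unit length.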
no code implementations • NeurIPS 2013 • Dimitris Achlioptas, Zohar Karnin, Edo Liberty
We consider the problem of selecting non-zero entries of a matrix $A$ in order to produce a sparse sketch of it, $B$, that minimizes $\|A-B\|_2$.
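A generic way to build such a sparse sketch is element-wise sampling: keep entry $a_{ij}$ with probability $p_{ij}$ proportional to its squared magnitude (capped at 1), and rescale each kept entry by $1/p_{ij}$ so that $\mathbb{E}[B] = A$. The sketch below illustrates this magnitude-based sampling idea under those assumptions; it is not necessarily the exact scheme of the paper, and `keep_frac` is a hypothetical knob for the expected fraction of retained entries.

```python
import random

def sparsify(A, keep_frac=0.3, seed=0):
    """Element-wise sampling sketch of matrix A (list of lists).

    Entry a_ij is kept with probability p_ij proportional to a_ij^2
    (capped at 1) and rescaled by 1/p_ij, so E[B] = A. Large entries
    are almost surely kept; small ones are usually zeroed out.
    """
    rng = random.Random(seed)
    total = sum(a * a for row in A for a in row)  # squared Frobenius norm
    n_entries = sum(len(row) for row in A)
    budget = keep_frac * n_entries  # expected number of kept entries
    B = []
    for row in A:
        new_row = []
        for a in row:
            p = min(1.0, budget * a * a / total) if total else 0.0
            if p > 0 and rng.random() < p:
                new_row.append(a / p)  # rescale for unbiasedness
            else:
                new_row.append(0.0)
        B.append(new_row)
    return B

# Dominant diagonal entries survive; tiny off-diagonal ones are dropped.
A = [[5.0, 0.1], [0.1, 5.0]]
B = sparsify(A, keep_frac=0.5, seed=0)
```

Choosing the $p_{ij}$ well is exactly what governs how small the spectral error $\|A-B\|_2$ can be made for a given sparsity budget.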