no code implementations • 22 Apr 2022 • Seyran Saeedi, Aliakbar Panahi, Tom Arodz
While the availability of data for training machine learning models is steadily increasing, it is often much easier to collect feature vectors than to obtain the corresponding labels.
1 code implementation • NeurIPS 2021 • Aliakbar Panahi, Seyran Saeedi, Tom Arodz
Language models employ a very large number of trainable parameters.
1 code implementation • ICLR 2020 • Aliakbar Panahi, Seyran Saeedi, Tom Arodz
Our approach achieves a hundred-fold or more reduction in the space required to store the embeddings with almost no relative drop in accuracy in practical natural language processing tasks.
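The abstract does not spell out the construction used, so as a generic illustration only (not the paper's method), here is one common way to shrink embedding storage: replace the full vocab × dim matrix with a low-rank factorization, keeping two small factors instead of one large table. All names and the toy sizes below are hypothetical.

```python
import numpy as np

def compress_embeddings(E, rank):
    """Approximate an embedding matrix E (vocab x dim) with a rank-r
    factorization U_r @ V_r, cutting storage from vocab*dim numbers
    down to rank*(vocab + dim)."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # (vocab, rank), singular values folded in
    V_r = Vt[:rank, :]             # (rank, dim)
    return U_r, V_r

# Toy usage: 10k-word vocabulary, 300-dim embeddings, rank 8
rng = np.random.default_rng(0)
E = rng.standard_normal((10_000, 300))
U_r, V_r = compress_embeddings(E, rank=8)
reduction = E.size / (U_r.size + V_r.size)
print(reduction)  # storage reduction factor (~36x at these toy sizes)
```

A row's embedding is then reconstructed on the fly as `U_r[i] @ V_r`; how much accuracy survives at a given rank depends on the task and the actual compression scheme used.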
no code implementations • 18 Oct 2019 • Xi Gao, Han Zhang, Aliakbar Panahi, Tom Arodz
When samples have internal structure, we often see a mismatch between the objective optimized during training and the model's goal during inference.
no code implementations • ICML 2020 • Han Zhang, Xi Gao, Jacob Unterman, Tom Arodz
Neural ODEs and i-ResNet are recently proposed methods for enforcing invertibility of residual neural models.
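As background for the i-ResNet side of that comparison: a residual block $y = x + f(x)$ is invertible whenever $f$ is contractive (Lipschitz constant below 1, which i-ResNet enforces via spectral normalization), and the inverse can be recovered by fixed-point iteration. A minimal sketch with a hypothetical contractive map:

```python
import numpy as np

def invert_residual_block(f, y, n_iters=50):
    """Invert y = x + f(x) by the fixed-point iteration x <- y - f(x).
    Converges geometrically when f is a contraction (Lip(f) < 1)."""
    x = y.copy()
    for _ in range(n_iters):
        x = y - f(x)
    return x

# Toy contractive map: f(x) = 0.5 * tanh(x) has Lipschitz constant 0.5
f = lambda x: 0.5 * np.tanh(x)
x_true = np.array([0.3, -1.2, 2.0])
y = x_true + f(x_true)          # forward pass of the residual block
x_rec = invert_residual_block(f, y)
print(np.max(np.abs(x_rec - x_true)))  # recovery error, near machine precision
```

Neural ODEs obtain invertibility differently, by integrating a learned vector field backward in time; the paper analyzes how the two approaches relate.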
no code implementations • 5 Feb 2019 • Seyran Saeedi, Tom Arodz
We analyze the computational complexity of Quantum Sparse Support Vector Machine, a linear classifier that minimizes the hinge loss and the $L_1$ norm of the feature weights vector and relies on a quantum linear programming solver instead of a classical solver.
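In standard form, the classifier described above solves (with a regularization weight $\lambda$ introduced here for illustration; the paper's exact parameterization may differ):

$$\min_{w} \; \frac{1}{n} \sum_{i=1}^{n} \max\bigl(0,\, 1 - y_i \, w^\top x_i\bigr) + \lambda \|w\|_1,$$

where the first term is the hinge loss over training pairs $(x_i, y_i)$ and the $L_1$ penalty promotes sparsity in the feature weights $w$; the quantum speedup comes from solving the resulting linear program with a quantum rather than classical solver.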