no code implementations • 6 May 2023 • George Adam, Benjamin Haibe-Kains, Anna Goldenberg
As more data is gathered over time, deployed machine learning models should be updated to take advantage of the larger sample size and improve performance.
no code implementations • 12 May 2020 • George Adam, Romain Speciel
Adversarial examples, slightly perturbed inputs generated with the aim of fooling a neural network, are known to transfer between models: an example crafted against one model will often fool another.
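Transfer can be illustrated with a minimal sketch of the Fast Gradient Sign Method on two logistic models; the weights, inputs, and epsilon below are hypothetical and purely illustrative, not taken from the paper.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    # Logistic model p = sigmoid(w.x + b); the cross-entropy gradient
    # with respect to the input x is (p - y) * w. FGSM steps in the
    # sign of that gradient to increase the loss.
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return x + eps * np.sign((p - y) * w)

# Two hypothetical models with similar (but not identical) weights.
w_a = np.array([2.0, 1.0])
w_b = np.array([1.5, 1.2])
x, y = np.array([1.0, -1.0]), 1.0

# Craft the perturbation against model A only.
x_adv = fgsm_perturb(x, w_a, 0.0, y, eps=1.0)

def pred(w, x):
    # Positive-class prediction for a bias-free logistic model.
    return np.dot(w, x) > 0
```

Both models classify `x` as positive, yet both misclassify `x_adv`, even though the perturbation was computed from model A alone: the adversarial example transfers.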
no code implementations • 16 Apr 2019 • George Adam, Petr Smirnov, Benjamin Haibe-Kains, Anna Goldenberg
We investigate the transferability of adversarial examples between models using the angle between the input-output Jacobians of different models.
no code implementations • 31 Mar 2019 • George Adam, Jonathan Lorraine
This reduction in computation is enabled via weight sharing, as in Efficient Neural Architecture Search (ENAS).
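The idea behind weight sharing can be sketched with a toy example in which every candidate ("child") architecture draws its parameters from one shared pool rather than training from scratch; the op names, shapes, and values below are hypothetical, not ENAS's actual search space.

```python
import numpy as np

# Hypothetical shared parameter pool: every candidate operation draws
# its weights from this dict, so no child architecture owns private copies.
shared = {
    "op_a": np.ones((4, 4)),
    "op_b": np.full((4, 4), 2.0),
}

def child_forward(arch, x):
    # A child architecture is just a sequence of op names; all children
    # reuse the same underlying parameters (ENAS-style weight sharing),
    # so gradient updates to one child's ops benefit every other child.
    for op in arch:
        x = shared[op] @ x
    return x

x = np.ones(4)
y1 = child_forward(["op_a", "op_b"], x)  # one candidate architecture
y2 = child_forward(["op_b", "op_a"], x)  # another, reusing the same weights
```

Because candidates share parameters, evaluating a new architecture requires only a forward pass through existing weights rather than full training, which is the source of the computational savings.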