no code implementations • 25 Oct 2021 • Helen Ngo, João G. M. Araújo, Jeffrey Hui, Nicholas Frosst
The One Billion Word Benchmark is a dataset derived from the WMT 2011 News Crawl, commonly used to measure language modeling ability in natural language processing.
no code implementations • 4 Aug 2021 • Helen Ngo, Cooper Raterink, João G. M. Araújo, Ivan Zhang, Carol Chen, Adrien Morisot, Nicholas Frosst
Language models trained on large-scale unfiltered datasets curated from the open web acquire systemic biases, prejudices, and harmful views from their training data.
6 code implementations • NeurIPS 2021 • Rishabh Agarwal, Levi Melnick, Nicholas Frosst, Xuezhou Zhang, Ben Lengerich, Rich Caruana, Geoffrey Hinton
They perform similarly to existing state-of-the-art generalized additive models in accuracy, but are more flexible because they are based on neural nets instead of boosted trees.
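The additive structure described above can be sketched in a few lines: each input feature gets its own small neural net, and the prediction is simply the sum of the per-feature outputs plus a bias. This is an illustrative toy with random weights, not the paper's architecture (the real model specifies particular hidden units, regularization, and training details); all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_feature_net(hidden=16):
    # One tiny MLP per input feature: a stand-in for the per-feature
    # "shape functions" of a neural additive model (weights are random
    # here purely for illustration; in practice they are trained).
    w1 = rng.normal(size=(1, hidden))
    b1 = rng.normal(size=hidden)
    w2 = rng.normal(size=(hidden, 1))
    return w1, b1, w2

def shape_fn(x, params):
    # x: (batch,) values of one feature -> (batch,) contribution to the output.
    w1, b1, w2 = params
    h = np.maximum(x[:, None] @ w1 + b1, 0.0)  # ReLU hidden layer
    return (h @ w2)[:, 0]

def nam_predict(X, nets, bias=0.0):
    # The prediction is a SUM of independent per-feature contributions,
    # which is what gives the model GAM-style interpretability: each
    # feature's effect can be plotted on its own.
    return bias + sum(shape_fn(X[:, i], nets[i]) for i in range(X.shape[1]))

X = rng.normal(size=(4, 3))
nets = [make_feature_net() for _ in range(3)]
y = nam_predict(X, nets)
```

Because the contributions are added rather than mixed inside a joint network, changing one feature's value can only move the prediction through that feature's own shape function.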
no code implementations • 18 Feb 2020 • Yao Qin, Nicholas Frosst, Colin Raffel, Garrison Cottrell, Geoffrey Hinton
There has been an ongoing cycle in which stronger defenses against adversarial attacks are subsequently broken by more advanced, defense-aware attacks.
no code implementations • ICLR 2020 • Yao Qin, Nicholas Frosst, Sara Sabour, Colin Raffel, Garrison Cottrell, Geoffrey Hinton
Then, we diagnose the adversarial examples for CapsNets and find that the success of the reconstructive attack is highly related to the visual similarity between the source and target class.
4 code implementations • 5 Feb 2019 • Nicholas Frosst, Nicolas Papernot, Geoffrey Hinton
We explore and expand the $\textit{Soft Nearest Neighbor Loss}$ to measure the $\textit{entanglement}$ of class manifolds in representation space: i.e., how close pairs of points from the same class are relative to pairs of points from different classes.
1 code implementation • 20 Dec 2018 • Calden Wloka, Toni Kunić, Iuliia Kotseruba, Ramin Fahimi, Nicholas Frosst, Neil D. B. Bruce, John K. Tsotsos
The Saliency Model Implementation Library for Experimental Research (SMILER) is a new software package which provides an open, standardized, and extensible framework for maintaining and executing computational saliency models.
no code implementations • 16 Nov 2018 • Nicholas Frosst, Sara Sabour, Geoffrey Hinton
In addition to being trained to classify images, the capsule model is trained to reconstruct the images from the pose parameters and identity of the correct top-level capsule.
2 code implementations • ICLR 2018 • Geoffrey E. Hinton, Sara Sabour, Nicholas Frosst
A capsule in one layer votes for the pose matrix of many different capsules in the layer above by multiplying its own pose matrix by trainable viewpoint-invariant transformation matrices that could learn to represent part-whole relationships.
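The voting mechanism described above is just a batch of matrix products: each lower-level capsule's pose matrix is multiplied by a learned transformation matrix per higher-level capsule. The sketch below uses random matrices purely for illustration (in the model the $W$ matrices are trainable), and checks the key property that motivates the design: a viewpoint change applied to every pose propagates to every vote, because matrix multiplication is associative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_lower, n_upper = 3, 2
# Each lower-level capsule holds a 4x4 pose matrix.
poses = rng.normal(size=(n_lower, 4, 4))
# One viewpoint-invariant transformation matrix per (lower, upper) pair;
# these represent part-whole relationships and are trainable in the model.
W = rng.normal(size=(n_lower, n_upper, 4, 4))

# Vote of capsule i for capsule j: pose_i @ W_ij, for all pairs at once.
votes = np.einsum('iab,ijbc->ijac', poses, W)

# Viewpoint equivariance: transforming every pose by V transforms every
# vote by the same V, since (V @ P_i) @ W_ij == V @ (P_i @ W_ij).
V = np.diag([2.0, 2.0, 2.0, 1.0])  # an arbitrary example transform
votes_after = np.einsum('ab,ibc,ijcd->ijad', V, poses, W)
```

Because the $W$ matrices do not depend on viewpoint, agreement between votes survives changes of viewpoint, which is what lets the layer above detect a consistently posed whole.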
Ranked #4 on Image Classification on smallNORB
6 code implementations • 27 Nov 2017 • Nicholas Frosst, Geoffrey Hinton
They excel when the input data is high dimensional, the relationship between the input and the output is complicated, and the number of labeled training examples is large.
78 code implementations • NeurIPS 2017 • Sara Sabour, Nicholas Frosst, Geoffrey E. Hinton
We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters.
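For the vector's length to act as a probability it must lie in $[0, 1)$, which the paper achieves with a "squashing" non-linearity: short vectors are shrunk toward zero and long vectors saturate toward unit length, while the direction (the instantiation parameters) is preserved. A small numpy version (the `eps` term is an implementation convenience for numerical safety, not part of the paper's formula):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # v = (|s|^2 / (1 + |s|^2)) * (s / |s|)
    # Length of the output encodes the probability that the entity
    # exists; the direction encodes its instantiation parameters.
    sq = np.sum(s ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(sq + eps)  # eps avoids division by zero for s == 0
    return (sq / (1.0 + sq)) * (s / norm)
```

A long input like `[10, 0]` squashes to length just under 1, while a short input like `[0.01, 0]` squashes to a length near zero, so downstream layers can read existence directly off the norm.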
Ranked #1 on Image Classification on MultiMNIST