no code implementations • 29 Apr 2024 • Christopher J. Kymn, Sonia Mazelet, Annabel Ng, Denis Kleyko, Bruno A. Olshausen
We propose a system for visual scene analysis and recognition based on encoding the sparse, latent feature representation of an image into a high-dimensional vector that is subsequently factorized to parse scene content.
no code implementations • 8 Nov 2023 • Christopher J. Kymn, Denis Kleyko, E. Paxon Frady, Connor Bybee, Pentti Kanerva, Friedrich T. Sommer, Bruno A. Olshausen
We introduce Residue Hyperdimensional Computing, a computing framework that unifies residue number systems with an algebra defined over random, high-dimensional vectors.
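The residue arithmetic that this framework lifts into high-dimensional vectors can be sketched in a few lines of plain Python; this is a minimal illustration of residue number systems only (the moduli are arbitrary choices, and the random-vector encoding itself is omitted):

```python
import math

def to_residues(x, moduli):
    # Represent an integer by its remainders modulo pairwise-coprime moduli.
    return [x % m for m in moduli]

def from_residues(residues, moduli):
    # Chinese Remainder Theorem reconstruction of the original integer.
    M = math.prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return x % M

moduli = [3, 5, 7]                      # covers integers 0..104
r = to_residues(17, moduli)             # [2, 2, 3]
assert from_residues(r, moduli) == 17

# Addition acts independently, component-wise, on the residues:
s = [(a + b) % m for a, b, m in
     zip(to_residues(17, moduli), to_residues(20, moduli), moduli)]
assert from_residues(s, moduli) == 37
```

The appeal for distributed representations is that this component-wise carry-free arithmetic maps naturally onto binding operations over random vectors.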
no code implementations • 26 May 2023 • Denis Kleyko, Connor Bybee, Ping-Chen Huang, Christopher J. Kymn, Bruno A. Olshausen, E. Paxon Frady, Friedrich T. Sommer
In particular, we find that the decoding techniques from the sparse coding and compressed sensing literature (rarely used for Hyperdimensional Computing/Vector Symbolic Architectures) are also well-suited for decoding information from the compositional distributed representations.
no code implementations • 23 Mar 2023 • E. Paxon Frady, Spencer Kent, Quinn Tran, Pentti Kanerva, Bruno A. Olshausen, Friedrich T. Sommer
In contrast to learning category labels, here we train deep neural networks to output the full compositional vector description of an input image.
no code implementations • 7 Dec 2022 • Connor Bybee, Denis Kleyko, Dmitri E. Nikonov, Amir Khosrowshahi, Bruno A. Olshausen, Friedrich T. Sommer
A prominent approach to solving combinatorial optimization problems on parallel hardware is Ising machines, i.e., hardware implementations of networks of interacting binary spin variables.
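A toy software analogue of such a machine is energy minimization over ±1 spins; the sketch below uses simulated annealing (one generic strategy, not the hardware dynamics of any particular Ising machine) on a small ferromagnetic coupling matrix:

```python
import math
import random

def ising_energy(spins, J):
    # E = -sum_{i<j} J[i][j] * s_i * s_j, with symmetric couplings J.
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def anneal(J, steps=2000, seed=0):
    rng = random.Random(seed)
    n = len(J)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for t in range(steps):
        temp = max(0.01, 1.0 - t / steps)          # linear cooling schedule
        i = rng.randrange(n)
        # Energy change from flipping spin i (J assumed symmetric).
        dE = 2 * spins[i] * sum(J[i][j] * spins[j] for j in range(n) if j != i)
        if dE <= 0 or rng.random() < math.exp(-dE / temp):
            spins[i] = -spins[i]
    return spins

# All-positive couplings: the ground states are the two aligned configurations.
J = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
spins = anneal(J)
```

Encoding a combinatorial problem means choosing `J` so that its ground states correspond to optimal solutions.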
no code implementations • 28 Aug 2022 • Ping-Chen Huang, Denis Kleyko, Jan M. Rabaey, Bruno A. Olshausen, Pentti Kanerva
With only 1.02k active parameters and a 128-minute pass through the training data we achieve Top-1 and Top-5 scores of 31% and 52% on the VoxCeleb1 dataset of 1,251 speakers.
no code implementations • 26 Aug 2022 • Alpha Renner, Lazar Supic, Andreea Danielescu, Giacomo Indiveri, Bruno A. Olshausen, Yulia Sandamirskaya, Friedrich T. Sommer, E. Paxon Frady
Understanding a visual scene by inferring the identities and poses of its individual objects is still an open problem.
no code implementations • 23 Apr 2022 • Michael Y.-S. Fang, Mayur Mudigonda, Ryan Zarcone, Amir Khosrowshahi, Bruno A. Olshausen
Moreover we show that Langevin dynamics lead to an efficient procedure for sampling from the posterior distribution in the 'L0 sparse' regime, where latent variables are encouraged to be set to zero as opposed to having a small L1 norm.
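The basic Langevin update behind such posterior sampling is short; the sketch below samples a standard normal rather than the paper's L0-sparse posterior, purely to show the mechanics of unadjusted Langevin dynamics (step size and iteration counts are illustrative):

```python
import math
import random

def langevin_samples(grad_log_p, step=0.1, n_steps=20000, burn=1000, seed=1):
    # Unadjusted Langevin dynamics: a gradient step on log p plus Gaussian noise.
    rng = random.Random(seed)
    x, out = 0.0, []
    for t in range(n_steps):
        x += 0.5 * step * grad_log_p(x) + math.sqrt(step) * rng.gauss(0.0, 1.0)
        if t >= burn:                      # discard burn-in iterations
            out.append(x)
    return out

# Target: standard normal, for which grad log p(x) = -x.
samples = langevin_samples(lambda x: -x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# mean should be near 0 and var near 1 (up to discretization bias).
```

In the sparse-coding setting, the same update is applied to the latent coefficients with the gradient of the model's log posterior.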
no code implementations • 2 Mar 2022 • Denis Kleyko, Connor Bybee, Christopher J. Kymn, Bruno A. Olshausen, Amir Khosrowshahi, Dmitri E. Nikonov, Friedrich T. Sommer, E. Paxon Frady
In this paper, we present an approach to integer factorization using distributed representations formed with Vector Symbolic Architectures.
no code implementations • 8 Sep 2021 • E. Paxon Frady, Denis Kleyko, Christopher J. Kymn, Bruno A. Olshausen, Friedrich T. Sommer
By analogy to VSA, we call this new function encoding and computing framework Vector Function Architecture (VFA).
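One concrete construction in this family is fractional power encoding with random phasor vectors; the sketch below is a minimal illustration (dimension and assertions are arbitrary choices, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2000
theta = rng.uniform(-np.pi, np.pi, d)   # random phases define one base vector

def encode(z):
    # Fractional power encoding: the base phasor vector raised to real power z.
    return np.exp(1j * theta * z)

def sim(u, v):
    # Normalized real inner product between two encodings.
    return np.real(np.vdot(u, v)) / len(u)

# Self-similarity is 1; similarity decays with distance between encoded values,
# inducing a shift-invariant kernel over the real line.
assert abs(sim(encode(1.0), encode(1.0)) - 1.0) < 1e-9
assert sim(encode(0.0), encode(3.0)) < 0.1

# Binding (elementwise product) adds the encoded values exactly.
assert np.allclose(encode(1.2) * encode(0.5), encode(1.7))
```

The last property is what lets vector binding implement function composition and convolution-like operations on encoded variables.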
1 code implementation • 17 Jun 2021 • Cameron Diao, Denis Kleyko, Jan M. Rabaey, Bruno A. Olshausen
Machine learning algorithms deployed on edge devices must meet certain resource constraints and efficiency requirements.
no code implementations • 9 Jun 2021 • Denis Kleyko, Mike Davies, E. Paxon Frady, Pentti Kanerva, Spencer J. Kent, Bruno A. Olshausen, Evgeny Osipov, Jan M. Rabaey, Dmitri A. Rachkovskij, Abbas Rahimi, Friedrich T. Sommer
We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware.
no code implementations • 7 Jul 2020 • E. Paxon Frady, Spencer Kent, Bruno A. Olshausen, Friedrich T. Sommer
The ability to encode and manipulate data structures with distributed neural representations could qualitatively enhance the capabilities of traditional neural networks by supporting rule-based symbolic reasoning, a central property of cognition.
1 code implementation • 9 Oct 2019 • Juexiao Zhang, Yubei Chen, Brian Cheung, Bruno A. Olshausen
Word embedding techniques based on co-occurrence statistics have proved very useful for extracting the semantic and syntactic structure of words as low-dimensional continuous vectors.
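The co-occurrence idea itself fits in a few lines; this is a generic sketch (tiny toy corpus, raw counts with no reweighting or dimensionality reduction), not the paper's method:

```python
import math
from collections import defaultdict

def cooccurrence(tokens, window=2):
    # Count how often each word appears within `window` positions of another.
    counts = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[w][tokens[j]] += 1.0
    return counts

def cosine(u, v):
    # Cosine similarity between two sparse count vectors (dicts).
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = "the cat sat on the mat the dog sat on the rug".split()
C = cooccurrence(corpus)
# Words used in similar contexts ("cat"/"dog") get similar count vectors.
assert cosine(C["cat"], C["dog"]) > cosine(C["cat"], C["on"])
```

Embedding methods then compress such count vectors into dense low-dimensional vectors that preserve these similarity relations.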
no code implementations • 19 Jun 2019 • Spencer J. Kent, E. Paxon Frady, Friedrich T. Sommer, Bruno A. Olshausen
We develop theoretical foundations of Resonator Networks, a new type of recurrent neural network introduced in Frady et al. (2020) to solve a high-dimensional vector factorization problem arising in Vector Symbolic Architectures.
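The factorization problem and the resonator iteration can be sketched directly; dimensions, codebook sizes, and the superposition initialization below are illustrative choices, not the paper's analyzed settings:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 1000, 10                        # vector dimension, codebook size
X = rng.choice([-1, 1], size=(k, d))   # codebook for factor 1
Y = rng.choice([-1, 1], size=(k, d))   # codebook for factor 2

c = X[3] * Y[7]                        # composite vector: elementwise binding

def sign(v):
    return np.where(v >= 0, 1, -1)

# Each estimate unbinds the other factor from the composite, then "cleans up"
# by projecting onto its codebook and re-expanding.
x_hat = sign(X.sum(axis=0))            # initialize with codebook superpositions
y_hat = sign(Y.sum(axis=0))
for _ in range(20):
    x_hat = sign(X.T @ (X @ (c * y_hat)))
    y_hat = sign(Y.T @ (Y @ (c * x_hat)))

# The iteration settles on the true factors.
assert np.array_equal(x_hat, X[3]) and np.array_equal(y_hat, Y[7])
```

The search space here has k² candidate factorizations, yet the network never enumerates them; it searches in superposition.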
no code implementations • NeurIPS 2018 • Yubei Chen, Dylan M. Paiton, Bruno A. Olshausen
We present a signal representation framework called the sparse manifold transform that combines key ideas from sparse coding, manifold learning, and slow feature analysis.
no code implementations • 26 May 2016 • Alexander G. Anderson, Cory P. Berg, Daniel P. Mossing, Bruno A. Olshausen
The other naive method, which initializes the optimization for the next frame with the rendered version of the previous frame, also produces poor results, because the texture features stay fixed relative to the frame of the movie instead of moving with objects in the scene.
no code implementations • arXiv 2015 • Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen
Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification.
1 code implementation • 20 Dec 2014 • Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen
Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification.
no code implementations • NeurIPS 2010 • Pierre Garrigues, Bruno A. Olshausen
We show that, due to the conjugacy of the Gamma prior, it is possible to derive efficient inference procedures for both the coefficients and the scale parameter.
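The conjugacy argument fits in one line; the sketch below assumes the Laplacian scale mixture form with a Gamma(α, β) prior on the rate parameter λ (generic symbols, not necessarily the paper's exact notation):

```latex
p(\lambda \mid x)
\;\propto\;
\underbrace{\tfrac{\lambda}{2}\, e^{-\lambda |x|}}_{\text{Laplacian likelihood}}
\cdot
\underbrace{\tfrac{\beta^{\alpha}}{\Gamma(\alpha)}\, \lambda^{\alpha-1} e^{-\beta \lambda}}_{\text{Gamma prior}}
\;\propto\;
\lambda^{(\alpha+1)-1}\, e^{-(\beta + |x|)\lambda}
```

The posterior over the scale is thus again a Gamma distribution, Gamma(α + 1, β + |x|), so its moments are available in closed form and no numerical integration over λ is needed.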
no code implementations • 7 Jan 2010 • Jascha Sohl-Dickstein, Ching Ming Wang, Bruno A. Olshausen
Transformation operators are represented in their eigen-basis, reducing the computational complexity of parameter estimation to that of training a linear transformation model.
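The computational benefit of the eigen-basis can be seen on a toy one-parameter group; the sketch below uses the known generator of 2-D rotations (a stand-in for a learned operator), so that applying the transformation for any amount s reduces to an elementwise exponential of eigenvalues:

```python
import numpy as np

# One-parameter transformation group T(s) = expm(s * A).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # generator of 2-D rotations
w, V = np.linalg.eig(A)                # eigenvalues +/- i; diagonalize once
Vinv = np.linalg.inv(V)

def transform(x, s):
    # In the eigen-basis, expm(s * A) is just exp(s * eigenvalues).
    return np.real(V @ (np.exp(s * w) * (Vinv @ x)))

x = np.array([1.0, 0.0])
y = transform(x, np.pi / 2)            # quarter-turn rotation
assert np.allclose(y, [0.0, 1.0], atol=1e-9)
```

Diagonalizing once and reusing the eigen-basis is what turns repeated matrix exponentials into cheap elementwise operations during parameter estimation.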
no code implementations • NeurIPS 2009 • Benjamin Culpepper, Bruno A. Olshausen
We describe a method for learning a group of continuous transformation operators to traverse smooth nonlinear manifolds.
no code implementations • NeurIPS 2007 • Pierre Garrigues, Bruno A. Olshausen
It has been shown that adapting a dictionary of basis functions to the statistics of natural images so as to maximize sparsity in the coefficients results in a set of dictionary elements whose spatial properties resemble those of V1 (primary visual cortex) receptive fields.